A comprehensive methodology for verifying claims in social media posts using AI assistance. This skill helps identify claims, gather evidence, analyze source independence, detect rhetorical fallacies, and provide clear verification reports.
This skill provides a structured framework for fact-checking social media posts (50-500 words) by:
- Identifying the primary claim in a post
- Gathering supporting and disputing evidence from multiple sources
- Analyzing source independence and circular citations (CRITICAL)
- Detecting rhetorical fallacies and manipulation tactics
- Evaluating ideological alignment of sources
- Reporting findings in a clear, actionable format
This project demonstrates several key competencies in AI development and prompt engineering:
Structured Prompt Engineering: Shows how to design multi-step AI workflows with clear checkpoints, validation steps, and error handling. The methodology breaks down a complex task (fact-checking) into discrete, manageable steps that AI can execute consistently.
Multi-Step AI Workflows: Demonstrates orchestrating AI through a complex process—claim identification → triage → evidence gathering → source analysis → fallacy detection → reporting—where each step builds on previous outputs.
Documentation Best Practices: Provides multiple user paths (Claude Skills, copy-paste prompts, manual methodology) showing how to make AI tools accessible to different user segments with varying technical capabilities and subscription levels.
AI Safety & Limitations: Explicitly addresses AI limitations, potential failures, and the need for human oversight—critical for production AI applications.
Real-World Application: Tackles a genuine problem (misinformation) with practical constraints (paywalls, temporal limitations, unfalsifiable claims) rather than theoretical/toy examples.
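The multi-step workflow described above can be sketched as a simple pipeline where each stage consumes the previous stage's output. This is an illustrative sketch only; the function names, return shapes, and placeholder logic are hypothetical stand-ins for steps the AI performs, not part of the skill itself.

```python
def identify_claim(post: str) -> str:
    """Step 1: extract the primary assertion (placeholder heuristic;
    the real step relies on the AI's reading of the post)."""
    return post.split(".")[0]

def triage(claim: str) -> str:
    """Step 2: categorize as uncontroversial, disputed, or
    misdirected controversy (placeholder: always 'disputed')."""
    return "disputed"

def fact_check(post: str) -> dict:
    """Orchestrate the workflow; later steps build on earlier outputs."""
    claim = identify_claim(post)
    report = {"claim": claim, "triage": triage(claim)}
    if report["triage"] != "uncontroversial":
        # Downstream steps only run when the claim needs verification.
        report["evidence"] = "gather supporting and disputing sources"
        report["independence"] = "trace citation chains to primary sources"
        report["fallacies"] = "flag rhetorical manipulation tactics"
    return report
```

The point of the structure is the checkpointing: an uncontroversial claim short-circuits the pipeline, so the AI spends effort only where verification is needed.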
Note: Claude Skills require a Claude Pro subscription ($20/month) or Team plan.
- Download the `SKILL.md` file from this repository
- Go to your Claude.ai account
- Navigate to Settings → Skills → Upload Skill
- Upload the `SKILL.md` file
- Start a conversation and share a social media post or claim to verify

This provides the smoothest experience: Claude will automatically apply the methodology when you share claims.
For users without Claude Pro, use the ready-made prompt version:
- Download `PROMPT.txt` from this repository
- Copy the entire contents
- Paste it at the start of a conversation with any AI assistant that has web search:
- Free Claude.ai users
- ChatGPT (with web browsing enabled)
- Perplexity AI
- Other AI assistants
- Then share the post or claim you want fact-checked
See detailed instructions in the Free User Guide.
Use the methodology as a checklist for manual verification:
- Read through `SKILL.md` to understand the process
- Follow each step manually using your own research
- Great for teaching media literacy or when AI isn't available
For ChatGPT Custom GPTs or other integrations:
- Use `SKILL.md` as a system prompt
- Adapt the format to your platform's requirements
- Ensure web search capability is enabled
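For a chat-completions-style integration, wiring the methodology in as a system prompt might look like the sketch below. The request field names (`model`, `system`, `messages`) follow a common API shape but vary by platform, and the model name is a placeholder; adapt both to your provider's documentation.

```python
from pathlib import Path

def build_request(skill_path: str, user_post: str,
                  model: str = "your-model-here") -> dict:
    """Load SKILL.md and place it as the system prompt so the
    methodology governs every turn of the conversation."""
    skill_text = Path(skill_path).read_text(encoding="utf-8")
    return {
        "model": model,
        "system": skill_text,
        "messages": [
            {"role": "user",
             "content": f"Can you fact-check this post?\n\n{user_post}"},
        ],
    }
```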
You: Can you fact-check this post?
"BREAKING: New study shows coffee consumption reduces cancer risk by 80%.
Researchers at major university confirm what coffee lovers have known all along!"
AI: [Follows the fact-check methodology to verify the claim]
See the examples folder for detailed fact-check demonstrations.
Parses posts to identify the PRIMARY assertion, ensuring focus on what matters most.
Quickly categorizes claims as:
- Uncontroversial: Widely accepted facts
- Disputed: Claims needing verification
- Misdirected controversy: Sound main claims with contentious sub-claims
Uses web search to find:
- Supporting sources
- Disputing sources
- Original/primary sources
- Publication dates and authorship
The most important component:
- Identifies circular citations
- Distinguishes primary vs. secondary sources
- Creates citation chains
- Flags when "multiple sources" are really one source cited repeatedly
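The core of the independence check can be expressed as a graph problem: follow each source's citation chain back to its root and count the distinct roots. This is a minimal sketch; the `cites` mapping (source → the source it cites, or `None` for a primary source) is a hypothetical input that in practice the AI assembles from its web-search results.

```python
def root_source(source: str, cites: dict) -> str:
    """Follow the citation chain until a primary source is reached
    (the `seen` set guards against citation loops)."""
    seen = set()
    while cites.get(source) is not None and source not in seen:
        seen.add(source)
        source = cites[source]
    return source

def independent_roots(sources: list, cites: dict) -> set:
    """Collapse a list of sources down to their distinct primary roots."""
    return {root_source(s, cites) for s in sources}

# Example: three outlets that all trace back to one press release.
cites = {
    "Outlet A": "Press Release",
    "Outlet B": "Outlet A",
    "Outlet C": "Press Release",
    "Press Release": None,
}
roots = independent_roots(["Outlet A", "Outlet B", "Outlet C"], cites)
# len(roots) == 1 — the "multiple sources" are really one source.
```

A claim backed by three outlets with one root is far weaker than a claim backed by two outlets with two independent roots, which is exactly what this step surfaces.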
Notes if sources cluster along ideological/political lines, helping identify potential bias.
Aggressively flags:
- Headline-text mismatches
- Appeals to authority
- Cherry-picking
- False equivalences
- Correlation/causation errors
- Emotional manipulation
- And more...
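As a toy illustration of the flagging step, a few of the patterns above can be approximated with keyword heuristics. Real detection relies on the AI's judgment in context; the patterns and category names below are illustrative only.

```python
import re

# Hypothetical surface patterns for a handful of the fallacies listed above.
FALLACY_PATTERNS = {
    "appeal to authority": r"\b(experts?|researchers?|scientists?) (say|confirm|agree)\b",
    "emotional manipulation": r"\b(shocking|breaking|outrage(?:ous)?)\b",
    "overclaiming": r"\b(proves?|100%|always|never)\b",
}

def flag_fallacies(text: str) -> list[str]:
    """Return the names of all patterns that match the text."""
    return [name for name, pattern in FALLACY_PATTERNS.items()
            if re.search(pattern, text, re.IGNORECASE)]
```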
Delivers findings in a structured markdown report with confidence levels and caveats.
Each fact-check produces a report with:
- Main Claim Identified
- Triage (controversial or not)
- Evidence Summary (supporting and disputing)
- Source Independence Analysis (critical for validity)
- Ideological Alignment (potential bias indicators)
- Rhetorical Issues (manipulation tactics)
- Bottom Line (clear verdict with confidence level)
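The report sections above map naturally onto a small data structure rendered to markdown. The field names below mirror those sections but are otherwise hypothetical; the skill itself specifies the report's content, not this representation.

```python
from dataclasses import dataclass

@dataclass
class FactCheckReport:
    claim: str
    triage: str
    evidence: str
    independence: str
    alignment: str
    rhetoric: str
    bottom_line: str

    def to_markdown(self) -> str:
        """Render the seven report sections as a markdown document."""
        return "\n\n".join([
            f"## Main Claim Identified\n{self.claim}",
            f"## Triage\n{self.triage}",
            f"## Evidence Summary\n{self.evidence}",
            f"## Source Independence Analysis\n{self.independence}",
            f"## Ideological Alignment\n{self.alignment}",
            f"## Rhetorical Issues\n{self.rhetoric}",
            f"## Bottom Line\n{self.bottom_line}",
        ])
```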
This methodology has inherent limitations:
- Cannot access paywalled content
- Temporal limitations (only finds currently indexed sources)
- Cannot definitively prove/disprove all claims
- Some claims may be unfalsifiable or opinion-based
- AI can make mistakes - users should verify important claims themselves
See CONTRIBUTING.md for guidelines on improving this skill.
This project is licensed under the MIT License - see the LICENSE file for details.
Personal & Social:
- Verifying viral claims before sharing
- Understanding if news stories have independent confirmation
- Evaluating the credibility of health/science claims
Professional Applications:
- Content Moderation: Platforms evaluating user-generated content for misinformation
- Journalism: News organizations fact-checking claims in real-time
- Research: Academic researchers analyzing information spread patterns
- AI Safety: Organizations building safety layers for AI-generated content
Educational:
- Teaching critical thinking and media literacy
- Training students to evaluate sources and identify fallacies
- Demonstrating practical AI prompt engineering techniques
Technical/Development:
- Reference implementation for structured AI workflows
- Example of documentation for multi-audience AI tools
- Template for building verification systems with LLMs
This skill emphasizes source independence analysis because many fact-checking efforts fail to distinguish between multiple independent sources and circular citations of a single source.
Found an issue or have suggestions? Please open an issue on GitHub.
Disclaimer: This is a methodology for AI-assisted fact-checking. Always verify important claims through multiple independent sources and use critical thinking. AI can make mistakes.