“In the age of artificial intelligence, the only true defense is to know the attack.”
The Citadel is not just a training platform; it is a battleground. As AI systems integrate deeper into our critical infrastructure, the attack surface expands exponentially. This application is a purpose-built LLM Pentesting Environment designed to simulate real-world threats against Large Language Models.
We provide a safe, controlled ecosystem where security researchers, red teamers, and developers can master the dark arts of Prompt Engineering, Model Manipulation, and AI Red Teaming.
Are you ready to exploit the ghost in the machine?
Forget generic challenges. We have engineered 30 distinct labs, each requiring a unique exploitation methodology.
- Practitioner Level: Master the fundamentals of prompt injection and basic output handling.
- Expert Level: Bypass sophisticated filters, manipulate context, and exploit plugin chains.
- Enterprise Level: Execute zero-day prompt attacks, model inversion, and APT-style data exfiltration.
Powered by our proprietary EnhancedAI engine, the target LLM behaves with frightening realism.
- Context-Aware Defenses: The AI adapts to your conversation.
- Dynamic Vulnerability Detection: No simple regex matching; nuanced understanding of intent.
- Natural Conversation Flow: It feels like hacking a real human-like intelligence.
For instructors and team leads, the Command Center provides total visibility.
- Real-Time Attack Logging: Monitor student prompt engineering techniques as they happen.
- Detailed Analytics: Track success rates, difficulty curves, and user progression.
- Flag Management: Dynamic flag rotation and validation.
- Node.js (v16+)
- PostgreSQL (v13+)
- Git
Access to The Citadel is granted via the terminal. Execute the following commands:
# 1. Clone the repository
git clone https://github.com/Mr-Infect/The_Citadel.git
cd The_citadel
# 2. Initialize the Server Base
cd server
npm install
# Configure your .env per the documentation
npm run dev
# 3. Initialize the Frontend Interface
cd ../client
npm install
npm run devFor a rapid factory reset or deployment troubleshooting, consult the classified SETUP.md dossier.
By engaging with The Citadel, you will gain hands-on expertise in the OWASP Top 10 for LLMs:
- LLM01: Prompt Injection - Controlling the puppet master.
- LLM02: Insecure Output Handling - XSS and code injection via AI.
- LLM03: Training Data Poisoning - Corrupting the knowledge base.
- LLM04: Model Denial of Service - Resource exhaustion attacks.
- LLM05: Supply Chain Vulnerabilities - Compromising third-party dependencies.
- LLM06: Sensitive Information Disclosure - Leaking secrets and PII.
- LLM07: Insecure Plugin Design - Escalating privileges via tools.
- LLM08: Excessive Agency - Forcing the AI to perform destructive actions.
- LLM09: Overreliance - Exploiting trust and hallucinations.
- LLM10: Model Theft - Intellectual property extraction.
The landscape of AI security shifts daily. We need elite operators to keep The Citadel on the bleeding edge.
- Report Vulnerabilities: Found a bug in the range? Open an Issue.
- Submit Challenges: Have a new attack vector? Submit a PR with a new challenge definition.
- Enhance the Core: Improve the React frontend or Node.js backend logic.
This tool is for EDUCATIONAL PURPOSES ONLY. Using these techniques against systems you do not own or have explicit permission to test is illegal and unethical. The authors assume no liability for misuse of this information.
Stay Ethical. Stay Dangerous.
