Open-source tools for making AI apps safe.
Website · Docs · Discord · HuggingFace
We build tools to make AI agents safer. Block prompt injections, redact sensitive data, and sandbox coding agents.
| Tool | Description | |
|---|---|---|
| Superagent SDK | Detect and block prompt injections, redact PII and secrets, scan repos for threats | |
| VibeKit | Run coding agents in isolated sandboxes with data redaction and observability | |
| Grok CLI | AI agent that brings Grok directly into your terminal | |
| ReAG | Reasoning Augmented Generation. Query documents with full context, not chunked embeddings |
We publish open-weight guardrail models on HuggingFace. These models detect prompt injections and unsafe inputs at runtime. Run them on your infrastructure (CPU or GPU) with 50-100ms latency. No API calls, no data leaving your environment.
We offer red team testing for AI agents. We attack your production system to surface vulnerabilities, then give you the evidence to prove safety to your customers.
