Augustus
ActiveDescription
LLM security testing framework for detecting prompt injection, jailbreaks, and adversarial attacks with 190+ probes and 28 providers in a single Go binary.
LLM security testing framework for detecting prompt injection, jailbreaks, and adversarial attacks with 190+ probes and 28 providers in a single Go binary.
Research tool for bypassing commercial LLM guardrails to evaluate and improve the effectiveness of LLM safety defense mechanisms.
Easy to use LLM prompt injection detection and prompt input sanitization Python package with multiple detection methods and custom rules.
NVIDIA's open-source LLM vulnerability scanner that automatically detects security issues in language models including safety vulnerabilities, hallucination tendencies, jailbreak risks, and prompt injection attacks.
Tencent's full-stack AI red teaming platform integrating OpenClaw security scanning, agent scanning, skills scanning, MCP scanning, AI infrastructure scanning, and LLM jailbreak evaluation.