AgentList
Home · Projects · Articles · About

Augustus

Active
GitHub · Go · Apache-2.0

Description

LLM security testing framework for detecting prompt injection, jailbreaks, and adversarial attacks with 190+ probes and 28 providers in a single Go binary.

Tags

llm-security prompt-injection red-teaming jailbreak-detection go

Categories

🛡️ Security & Guardrails
Visit GitHub

Project Metrics

Stars 187
Forks 0
Watchers 0
Issues 0
Created January 1, 2025
Last commit April 21, 2026

Deployment

Local

Related Projects

CKA-Agent

197 · Python
Active

Research tool for bypassing commercial LLM guardrails to evaluate and improve the effectiveness of LLM safety defense mechanisms.

llm-security · guardrails-testing · red-teaming +2

Pytector

40 · Python
Active

Easy-to-use Python package for LLM prompt-injection detection and prompt input sanitization, with multiple detection methods and custom rules.

prompt-injection · detection · sanitization +2

Garak

7.6k · HTML
Active

NVIDIA's open-source LLM vulnerability scanner that automatically detects security issues in language models including safety vulnerabilities, hallucination tendencies, jailbreak risks, and prompt injection attacks.

llm-security · vulnerability-scanner · llm-evaluation +2

AI-Infra-Guard

3.5k · Python
Active

Tencent's full-stack AI red teaming platform integrating OpenClaw security scanning, agent scanning, skills scanning, MCP scanning, AI infrastructure scanning, and LLM jailbreak evaluation.

ai-security · red-teaming · llm-security +2
AgentList

Curated directory of open-source AI agent projects

Quick Links

  • Project List
  • Featured Articles
  • Browse Categories

Contact

  • About
  • Privacy Policy
  • Contact Us

© 2026 AgentList. All rights reserved.

Made with ❤️ for the open source community