AgentList
Home · Projects · Articles · About
Explore Projects

AI LLM Red Team Handbook

Active
GitHub · Python · License: NOASSERTION

Description

An AI and LLM red team field manual and consultant's handbook that systematically covers red team assessment methodologies, attack techniques, and defense strategies.

Tags

red-team · handbook · llm-security · attack-techniques · defense

Categories

🛡️ Security & Guardrails
Visit GitHub

Project Metrics

Stars 256
Forks 49
Watchers 256
Issues 0
Created November 27, 2025
Last commit May 8, 2026

Deployment

Local

Related Projects

FuzzyAI

1.4k · Jupyter Notebook
Stale

An automated LLM fuzzing tool from CyberArk that helps developers and security researchers identify and mitigate jailbreak vulnerabilities in LLM APIs using multiple attack vectors.

fuzzing · llm-security · jailbreak +2

Dropbox LLM Security

258 · Python
Stale

Open-source LLM security research code and results from Dropbox, covering LLM security testing methods, vulnerability analysis, and defense strategies.

llm-security · research · vulnerability-analysis +2

AIGoat

53 · JavaScript
Active

Open-source AI security playground for LLM red teaming, with hands-on labs covering the full OWASP LLM Top 10 and progressive defenses.

ai-safety · red-teaming · owasp +2

CKA-Agent

203 · Python
Active

Research tool for bypassing commercial LLM guardrails to evaluate and improve the effectiveness of LLM safety defense mechanisms.

llm-security · guardrails-testing · red-teaming +2
AgentList

The most comprehensive directory of open-source AI Agent projects. Discover and compare top Agent frameworks like LangChain, CrewAI, and more.

Quick Links

  • Project List
  • Featured Articles
  • Browse Categories

Contact

  • About
  • Privacy Policy
  • Contact Us

© 2026 AgentList. All rights reserved.

Made with ❤️ for the open source community