AgentList
HomeProjectsArticlesAbout
Explore Projects
HomeProjectsArticlesAbout
Explore Projects
Projects AgentDojo

AgentDojo

Normal
GitHub Python MIT

Description

A dynamic environment by ETH Zurich to evaluate attacks and defenses for LLM agents, providing standardized benchmarks for measuring agent system security.

Tags

security-benchmark agent-evaluation attack-defense llm-safety red-team

Categories

🛡️ Security & Guardrails
Visit GitHub Visit Website

Project Metrics

Stars 560
Forks 145
Watchers 560
Issues 25
Created February 29, 2024
Last commit March 30, 2026

Deployment

Local

Related Projects

EasyJailbreak

851 · Python
Normal

An easy-to-use Python framework for generating adversarial jailbreak prompts, helping researchers systematically evaluate LLM safety defenses with multiple attack method combinations.

jailbreakadversarialllm-safety +2

AI Red Teaming Playground Labs

1.9k · TypeScript
Normal

Microsoft's open-source AI red teaming playground labs with infrastructure for running AI red teaming trainings and hands-on security exercises.

red-teamtrainingsecurity +2

SCAM

105 · Python
Normal

Security Comprehension Awareness Measure by 1Password. An open-source benchmark testing AI agents' security awareness during realistic, multi-turn workplace tasks.

security-benchmarkagent-safetyworkplace +2

Giskard

5.3k · Python
Active

An open-source evaluation and testing library for LLM agents providing automated model scanning, bias detection, performance benchmarking, and compliance checks.

evaluationtestingllm-safety +3
AgentList

The most comprehensive directory of open-source AI Agent projects. Discover and compare top Agent frameworks like LangChain, CrewAI, and more.

Quick Links

  • Project List
  • Featured Articles
  • Browse Categories

Contact

  • About
  • Privacy Policy
  • Contact Us

© 2026 AgentList. All rights reserved.

Made with for the open source community