
LLM-Jailbreaks

Stale
GitHub · Apache-2.0

Description

A comprehensive collection of LLM jailbreak techniques and prompts for ChatGPT, Claude, Llama, and other models — essential reference for LLM security research.

Tags

llm · security · prompt-engineering

Categories

🛡️ Security & Guardrails

Project Metrics

Stars: 603
Forks: 55
Watchers: 603
Issues: 1
Created: April 21, 2024
Last commit: April 13, 2025

Deployment

Local

Related Projects

LLM Guard

2.8k · Python
Stale

The security toolkit for LLM interactions, providing prompt injection detection, PII anonymization, content safety auditing, and more to secure production LLM deployments.

security · llm · python +2

Rebuff

1.5k · TypeScript
Stale

An LLM prompt injection detector that combines heuristics, vector similarity, and language model-based detection to identify and block malicious prompt injection attacks.

security · llm · testing +2

Prompt Injection Defenses

677
Stale

A catalog of every practical and proposed defense against prompt injection, compiled as a comprehensive reference for LLM security practitioners.

security · llm · prompt-engineering

AgentShield

510 · TypeScript
Active

AI agent security scanner that detects vulnerabilities in agent configurations, MCP servers, and tool permissions. Available as CLI, GitHub Action, and GitHub App integration.

typescript · security · llm +2