AgentList
HomeProjectsArticlesAbout
Explore Projects
HomeProjectsArticlesAbout
Explore Projects
Projects Open-Prompt-Injection

Open-Prompt-Injection

Stale
GitHub Python MIT

Description

An open-source benchmark for prompt injection attacks and defenses in LLMs, systematically evaluating the effectiveness of different attack strategies and defense mechanisms.

Tags

prompt-injection benchmark llm-safety security defense

Categories

🛡️ Security & Guardrails
Visit GitHub

Project Metrics

Stars 439
Forks 67
Watchers 439
Issues 14
Created October 19, 2023
Last commit October 29, 2025

Deployment

Local

Related Projects

Vigil

478 · Python
Stale

Vigil is an LLM security detection tool that identifies prompt injections, jailbreaks, and other potentially risky LLM inputs through multi-dimensional analysis for real-time safety protection.

prompt-injectionsecurityllm-safety +2

Prompt Guard

152 · Python
Active

Advanced prompt injection defense system for AI agents with multi-language detection, severity scoring, and security auditing.

prompt-injectionsecurityguardrails +2

Jailbreak LLMs

3.7k · Jupyter Notebook
Stale

A dataset of 15,140 ChatGPT prompts including 1,405 jailbreak prompts from Reddit, Discord, and other platforms, providing a large-scale benchmark for LLM safety research and jailbreak detection.

jailbreakllm-safetybenchmark +2

AgentShield Benchmark

21 · TypeScript
Active

Open benchmark for AI agent security tools, evaluating prompt injection, data exfiltration, tool abuse, and provenance tracking.

securitybenchmarkai-safety +2
AgentList

The most comprehensive directory of open-source AI Agent projects. Discover and compare top Agent frameworks like LangChain, CrewAI, and more.

Quick Links

  • Project List
  • Featured Articles
  • Browse Categories

Contact

  • About
  • Privacy Policy
  • Contact Us

© 2026 AgentList. All rights reserved.

Made with for the open source community