AgentList
HomeProjectsArticlesAbout
Explore Projects
HomeProjectsArticlesAbout
Explore Projects
Home / Projects / Prompt Injection Defenses

Prompt Injection Defenses

Stale
GitHub

Description

Every practical and proposed defense against prompt injection — a comprehensive reference for LLM security practitioners.

Tags

security llm prompt-engineering

Categories

🛡️ Security & Guardrails
Visit GitHub

Project Metrics

Stars 677
Forks 50
Watchers 677
Issues 7
Created April 1, 2024
Last commit February 22, 2025

Deployment

Local

Related Projects

LLM-Jailbreaks

603 ·
Stale

A comprehensive collection of LLM jailbreak techniques and prompts for ChatGPT, Claude, Llama, and other models — essential reference for LLM security research.

llmsecurityprompt-engineering

LLM Guard

2.8k · Python
Stale

The security toolkit for LLM interactions, providing prompt injection detection, PII anonymization, content safety auditing, and more to secure production LLM deployments.

securityllmpython +2

Rebuff

1.5k · TypeScript
Stale

An LLM prompt injection detector that combines heuristics, vector similarity, and language model-based detection to identify and block malicious prompt injection attacks.

securityllmtesting +2

AgentShield

510 · TypeScript
Active

AI agent security scanner that detects vulnerabilities in agent configurations, MCP servers, and tool permissions. Available as CLI, GitHub Action, and GitHub App integration.

typescriptsecurityllm +2
AgentList

Curated directory of open-source AI agent projects

Quick Links

  • Project List
  • Featured Articles
  • Browse Categories

Contact

  • About
  • Privacy Policy
  • Contact Us

© 2026 AgentList. All rights reserved.

Made with for the open source community