AgentList
HomeProjectsArticlesAbout
Explore Projects
HomeProjectsArticlesAbout
Explore Projects
Home / Projects / LLM Security Prompt Injection

LLM Security Prompt Injection

Active
GitHub Jupyter Notebook MIT

Description

Research project investigating LLM security by performing binary classification for prompt injection attack detection and analysis.

Tags

prompt-injection llm-security classification research python

Categories

🛡️ Security & Guardrails
Visit GitHub

Project Metrics

Stars 62
Forks 0
Watchers 0
Issues 0
Created January 1, 2025
Last commit April 21, 2026

Deployment

Local

Related Projects

Pytector

40 · Python
Active

Easy to use LLM prompt injection detection and prompt input sanitization Python package with multiple detection methods and custom rules.

prompt-injectiondetectionsanitization +2

ZenGuard AI

150 · Python
Active

The fastest Trust Layer for AI Agents with prompt injection detection, PII filtering, and content safety guardrails.

llm-securityguardrailsprompt-injection +2

AegisGate

34 · Python
Active

Open-source security gateway for LLM APIs with prompt injection detection, PII redaction, dangerous response filtering, and more.

llm-securitygatewayprompt-injection +2

Agentic AI Security Starter Kit

12 · Python
Active

Working code examples to defend against Agentic AI threats including prompt injection detection, Claude Code security configuration, and agent access control.

agent-securityprompt-injectionaccess-control +2
AgentList

Curated directory of open-source AI agent projects

Quick Links

  • Project List
  • Featured Articles
  • Browse Categories

Contact

  • About
  • Privacy Policy
  • Contact Us

© 2026 AgentList. All rights reserved.

Made with for the open source community