AgentList
HomeProjectsArticlesAbout
Explore Projects
HomeProjectsArticlesAbout
Explore Projects
Home / Projects / LLM Guard

LLM Guard

Stale
GitHub Python MIT

Description

The security toolkit for LLM interactions, providing prompt injection detection, PII anonymization, content safety auditing, and more to secure production LLM deployments.

Tags

security llm python testing prompt-engineering

Categories

📊 Observability
Visit GitHub Visit Website

Project Metrics

Stars 2.8k
Forks 375
Watchers 2.8k
Issues 36
Created July 27, 2023
Last commit December 15, 2025

Deployment

Local

Related Projects

Rebuff

1.5k · TypeScript
Stale

An LLM prompt injection detector that combines heuristics, vector similarity, and language model-based detection to identify and block malicious prompt injection attacks.

securityllmtesting +2

PrompToMatix

948 · Python
Stale

An automatic prompt optimization framework by Salesforce AI Research that leverages LLMs to search for and refine prompts for improved model performance.

prompt-engineeringevaluationllm +1

Purple Llama

4.1k · Python
Active

Meta's set of tools to assess and improve LLM security, including safety benchmarks, prompt injection detection, and output auditing to help evaluate and enhance the safety of large language models.

securityevaluationpython +2

Prompt Ops

800 · Python
Active

An open-source tool from Meta for LLM prompt optimization. Automates the process of continuously improving and refining LLM prompts.

prompt-engineeringllmtools +2
AgentList

Curated directory of open-source AI agent projects

Quick Links

  • Project List
  • Featured Articles
  • Browse Categories

Contact

  • About
  • Privacy Policy
  • Contact Us

© 2026 AgentList. All rights reserved.

Made with for the open source community