AgentList
HomeProjectsArticlesAbout
Explore Projects
HomeProjectsArticlesAbout
Explore Projects
Projects Vigil

Vigil

Stale
GitHub Python Apache-2.0

Description

Vigil is an LLM security detection tool that identifies prompt injections, jailbreaks, and other potentially risky LLM inputs through multi-dimensional analysis for real-time safety protection.

Tags

prompt-injection security llm-safety detection guardrails

Categories

🛡️ Security & Guardrails
Visit GitHub Visit Website

Project Metrics

Stars 478
Forks 54
Watchers 478
Issues 17
Created September 4, 2023
Last commit January 31, 2024

Deployment

Local

Related Projects

Prompt Guard

152 · Python
Active

Advanced prompt injection defense system for AI agents with multi-language detection, severity scoring, and security auditing.

prompt-injectionsecurityguardrails +2

Open-Prompt-Injection

439 · Python
Stale

An open-source benchmark for prompt injection attacks and defenses in LLMs, systematically evaluating the effectiveness of different attack strategies and defense mechanisms.

prompt-injectionbenchmarkllm-safety +2

Pytector

40 · Python
Active

Easy to use LLM prompt injection detection and prompt input sanitization Python package with multiple detection methods and custom rules.

prompt-injectiondetectionsanitization +2

NeMo Guardrails

6.1k · Python
Active

NVIDIA NeMo Guardrails is an open-source toolkit for adding programmable guardrails to LLM-based conversational systems, supporting topic control, safety enforcement, and dialog guidance.

guardrailsllm-safetynvidia +2
AgentList

The most comprehensive directory of open-source AI Agent projects. Discover and compare top Agent frameworks like LangChain, CrewAI, and more.

Quick Links

  • Project List
  • Featured Articles
  • Browse Categories

Contact

  • About
  • Privacy Policy
  • Contact Us

© 2026 AgentList. All rights reserved.

Made with for the open source community