Pytector
ActiveDescription
Easy to use LLM prompt injection detection and prompt input sanitization Python package with multiple detection methods and custom rules.
Easy to use LLM prompt injection detection and prompt input sanitization Python package with multiple detection methods and custom rules.
The fastest Trust Layer for AI Agents with prompt injection detection, PII filtering, and content safety guardrails.
Lightweight prompt injection detection for LLM applications providing simple and efficient input safety validation.
Open-source security gateway for LLM APIs with prompt injection detection, PII redaction, dangerous response filtering, and more.
Research project investigating LLM security by performing binary classification for prompt injection attack detection and analysis.