LLM-Jailbreaks
StaleDescription
A comprehensive collection of LLM jailbreak techniques and prompts for ChatGPT, Claude, Llama, and other models — essential reference for LLM security research.
A comprehensive collection of LLM jailbreak techniques and prompts for ChatGPT, Claude, Llama, and other models — essential reference for LLM security research.
The security toolkit for LLM interactions, providing prompt injection detection, PII anonymization, content safety auditing, and more to secure production LLM deployments.
An LLM prompt injection detector that combines heuristics, vector similarity, and language model-based detection to identify and block malicious prompt injection attacks.
Every practical and proposed defense against prompt injection — a comprehensive reference for LLM security practitioners.
AI agent security scanner that detects vulnerabilities in agent configurations, MCP servers, and tool permissions. Available as CLI, GitHub Action, and GitHub App integration.