LLAMATOR
ActiveDescription
A Python red teaming framework for testing chatbots and GenAI systems, helping security teams discover and fix security vulnerabilities in AI systems.
A Python red teaming framework for testing chatbots and GenAI systems, helping security teams discover and fix security vulnerabilities in AI systems.
NVIDIA's open-source LLM vulnerability scanner that automatically detects security issues in language models including safety vulnerabilities, hallucination tendencies, jailbreak risks, and prompt injection attacks.
AI and LLM Red Team Field Manual and Consultant's Handbook, systematically covering red team assessment methodologies, attack techniques, and defense strategies.
Tencent's full-stack AI red teaming platform integrating OpenClaw security scanning, agent scanning, skills scanning, MCP scanning, AI infrastructure scanning, and LLM jailbreak evaluation.
An open-source platform for automatically testing AI agent security. Identifies vulnerabilities such as prompt injection, secret leakage, and system instruction exposure.