AgentList
HomeProjectsArticlesAbout
Explore Projects
HomeProjectsArticlesAbout
Explore Projects
Home / Projects / PyRIT

PyRIT

Active
GitHub Python MIT

Description

The Python Risk Identification Tool for generative AI — an open-source framework by Microsoft for proactively identifying risks in generative AI systems through red teaming and automated probing.

Tags

python security evaluation llm testing

Categories

🛡️ Security & Guardrails
Visit GitHub

Project Metrics

Stars 3.7k
Forks 734
Watchers 3.7k
Issues 90
Created December 12, 2023
Last commit April 20, 2026

Deployment

Local

Related Projects

Purple Llama

4.1k · Python
Active

Meta's set of tools to assess and improve LLM security, including safety benchmarks, prompt injection detection, and output auditing to help evaluate and enhance the safety of large language models.

securityevaluationpython +2

LLM Guard

2.8k · Python
Stale

The security toolkit for LLM interactions, providing prompt injection detection, PII anonymization, content safety auditing, and more to secure production LLM deployments.

securityllmpython +2

Agentic Radar

953 · Python
Stale

A security scanner for LLM agentic workflows. Automatically detects security vulnerabilities, prompt injection risks, and permission violations in agent pipelines before deployment.

securityagentpython +2

Giskard

5.3k · Python
Active

An open-source evaluation and testing library for LLM agents providing automated model scanning, bias detection, performance benchmarking, and compliance checks.

evaluationtestingllm-safety +3
AgentList

Curated directory of open-source AI agent projects

Quick Links

  • Project List
  • Featured Articles
  • Browse Categories

Contact

  • About
  • Privacy Policy
  • Contact Us

© 2026 AgentList. All rights reserved.

Made with for the open source community