AgentList

Awesome-LLM-Safety

Status: Normal
Source: GitHub · Language: HTML

Description

A curated collection of safety-related papers, articles, and resources focused on Large Language Models — a comprehensive reference for researchers and practitioners exploring LLM safety implications and advancements.

Tags

llm · security · evaluation

Categories

🛡️ Security & Guardrails
Visit GitHub

Project Metrics

Stars 1.8k
Forks 150
Watchers 1.8k
Issues 5
Created August 1, 2023
Last commit March 15, 2026

Deployment

Local

Related Projects

Purple Llama

4.1k · Python
Active

Meta's suite of tools for assessing and improving LLM security, including safety benchmarks, prompt-injection detection, and output auditing.

security · evaluation · python +2

PyRIT

3.7k · Python
Active

The Python Risk Identification Tool for generative AI — an open-source framework by Microsoft for proactively identifying risks in generative AI systems through red teaming and automated probing.

python · security · evaluation +2

AgentShield

510 · TypeScript
Active

AI agent security scanner that detects vulnerabilities in agent configurations, MCP servers, and tool permissions. Available as CLI, GitHub Action, and GitHub App integration.

typescript · security · llm +2

PentestGPT

12.7k · Python
Normal

An agentic penetration-testing framework powered by large language models for automated security testing and vulnerability discovery.

penetration-testing · security · llm +2

Curated directory of open-source AI agent projects


© 2026 AgentList. All rights reserved.

Made with ♥ for the open source community