Awesome-LLM-Safety
A curated collection of safety-related papers, articles, and resources focused on Large Language Models, serving as a comprehensive reference for researchers and practitioners exploring LLM safety implications and advances.
Meta's set of tools to assess and improve LLM security, including safety benchmarks, prompt injection detection, and output auditing to help evaluate and enhance the safety of large language models.
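To make the prompt-injection-detection idea concrete, here is a minimal sketch of a naive keyword heuristic. This is purely illustrative and is not Meta's actual detection logic; the pattern list and function name are assumptions for the example.

```python
import re

# Hypothetical phrases often associated with prompt-injection attempts.
# Real detectors use trained classifiers, not a fixed pattern list.
INJECTION_PATTERNS = [
    r"ignore (all )?(previous|prior) instructions",
    r"disregard (the )?system prompt",
    r"you are now (in )?developer mode",
]

def looks_like_injection(prompt: str) -> bool:
    """Return True if the prompt matches a known injection pattern."""
    lowered = prompt.lower()
    return any(re.search(p, lowered) for p in INJECTION_PATTERNS)
```

A heuristic like this catches only verbatim phrasings; production tooling pairs it with model-based classifiers and output auditing.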
The Python Risk Identification Tool for generative AI — an open-source framework by Microsoft for proactively identifying risks in generative AI systems through red teaming and automated probing.
AI agent security scanner that detects vulnerabilities in agent configurations, MCP servers, and tool permissions. Available as CLI, GitHub Action, and GitHub App integration.
An automated penetration-testing agent framework based on large language models, using LLMs to drive security testing and vulnerability discovery.