LLM Guard
StaleDescription
The security toolkit for LLM interactions, providing prompt injection detection, PII anonymization, content safety auditing, and more to secure production LLM deployments.
The security toolkit for LLM interactions, providing prompt injection detection, PII anonymization, content safety auditing, and more to secure production LLM deployments.
An LLM prompt injection detector that combines heuristics, vector similarity, and language model-based detection to identify and block malicious prompt injection attacks.
An automatic prompt optimization framework by Salesforce AI Research that leverages LLMs to search for and refine prompts for improved model performance.
Meta's set of tools to assess and improve LLM security, including safety benchmarks, prompt injection detection, and output auditing to help evaluate and enhance the safety of large language models.
An open-source tool from Meta for LLM prompt optimization. Automates the process of continuously improving and refining LLM prompts.