Rebuff
An LLM prompt injection detector that combines heuristics, vector similarity, and language model-based detection to identify and block malicious prompt injection attacks.
The security toolkit for LLM interactions, providing prompt injection detection, PII anonymization, content safety auditing, and more to secure production LLM deployments.
An AI prompt optimizer that helps users refine their prompts to get better results from AI models.
An LLM playground you can run on your laptop, letting you compare models side by side for prompt testing and model evaluation in a local environment.
An automatic prompt optimization framework by Salesforce AI Research that leverages LLMs to search for and refine prompts for improved model performance.