AgentList
HomeProjectsArticlesAbout
Explore Projects
HomeProjectsArticlesAbout
Explore Projects
Home / Projects / Rebuff

Rebuff

Stale
GitHub TypeScript Apache-2.0

Description

An LLM prompt injection detector that combines heuristics, vector similarity, and language model-based detection to identify and block malicious prompt injection attacks.

Tags

security llm testing prompt-engineering typescript

Categories

📊 Observability
Visit GitHub Visit Website

Project Metrics

Stars 1.5k
Forks 132
Watchers 1.5k
Issues 33
Created April 24, 2023
Last commit August 7, 2024

Deployment

Local

Related Projects

LLM Guard

2.8k · Python
Stale

The security toolkit for LLM interactions, providing prompt injection detection, PII anonymization, content safety auditing, and more to secure production LLM deployments.

securityllmpython +2

Prompt Optimizer

26.5k · TypeScript
Active

An AI prompt optimizer that helps users write better prompts and achieve improved AI results.

prompt-engineeringevaluationllm +2

OpenPlayground

6.4k · TypeScript
Normal

An LLM playground you can run on your laptop. Compare models side-by-side for prompt testing and model evaluation in a local environment.

llmtoolstypescript +2

PrompToMatix

948 · Python
Stale

An automatic prompt optimization framework by Salesforce AI Research that leverages LLMs to search for and refine prompts for improved model performance.

prompt-engineeringevaluationllm +1
AgentList

Curated directory of open-source AI agent projects

Quick Links

  • Project List
  • Featured Articles
  • Browse Categories

Contact

  • About
  • Privacy Policy
  • Contact Us

© 2026 AgentList. All rights reserved.

Made with for the open source community