LangKit
ActiveDescription
An open-source toolkit for monitoring Large Language Models, extracting signals from prompts and responses for quality and safety evaluation.
An open-source toolkit for monitoring Large Language Models, extracting signals from prompts and responses for quality and safety evaluation.
OpenTelemetry instrumentation for AI observability, providing standardized tracing, metrics collection, and span definitions for LLM inference processes to help developers monitor and debug AI agent systems.
Meta's set of tools to assess and improve LLM security, including safety benchmarks, prompt injection detection, and output auditing to help evaluate and enhance the safety of large language models.
An open-source tool from Meta for LLM prompt optimization. Automates the process of continuously improving and refining LLM prompts.
A toolkit by Weights & Biases for developing AI-powered applications, providing LLM call tracing, evaluation experiment management, and versioning from prototype to production.