LiteLLM
LiteLLM provides a unified interface and proxy gateway for LLM calls, simplifying multi-model switching, routing, and cost control.
Helicone is an open-source proxy and observability platform for LLM applications, offering request tracing, caching, and cost analytics.
Self-hosted, open-source AI gateway providing one API for 20+ LLM providers, databases, and files with integrated RAG, voice, and guardrails.
A high-throughput and memory-efficient inference and serving engine for LLMs, featuring PagedAttention, continuous batching, and optimized KV cache management for production deployments.
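In a typical production deployment, vLLM is launched as an OpenAI-compatible HTTP server and queried with the standard chat-completions route. A minimal sketch; the model id and port are illustrative:

```shell
# Serve a model with an OpenAI-compatible API (model id is an example;
# any Hugging Face model vLLM supports can be used).
vllm serve Qwen/Qwen2.5-1.5B-Instruct --port 8000

# Query it exactly as you would the OpenAI API:
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "Qwen/Qwen2.5-1.5B-Instruct",
       "messages": [{"role": "user", "content": "Hello"}]}'
```

Because the surface is OpenAI-compatible, existing OpenAI clients can point their base URL at the vLLM server without code changes.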
AI agent tooling for data engineering, bringing agent-assisted automation to data processing pipelines.