LocalAI
Open-source AI engine that runs any model (LLMs, vision, voice, image, video) on any hardware, no GPU required. Provides an OpenAI-compatible API for fully local, privacy-first AI inference.
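Because LocalAI exposes an OpenAI-compatible API, clients talk to it using the same request shape as OpenAI's `/v1/chat/completions` endpoint. The sketch below builds such a request against a local server; the base URL (LocalAI's default port 8080) and the model name `llama-3` are assumptions for illustration, not guaranteed by this listing.

```python
import json

# Assumed defaults: LocalAI listening on localhost:8080 and a model
# named "llama-3" already installed; adjust both for your setup.
BASE_URL = "http://localhost:8080/v1"

def chat_request(model: str, user_message: str) -> dict:
    """Build the JSON body for POST {BASE_URL}/chat/completions,
    following the OpenAI chat-completions request schema."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
    }

payload = chat_request("llama-3", "Summarize RAG in one sentence.")
print(json.dumps(payload))
```

To actually send the request you would POST this body with any HTTP client, or point the official `openai` Python client at `BASE_URL`; no API key is needed for a purely local server.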
A comprehensive single-package Retrieval-Augmented Generation platform built on Langflow, Docling, and OpenSearch, providing a complete pipeline from document parsing to vector retrieval and generation with multi-model and multi-vector-database support.
Opinionated RAG framework for integrating GenAI into your apps. Works with any LLM, any vector store, and any file format, so you can focus on your product instead of building RAG pipelines.
Run open-source LLMs such as DeepSeek and Llama as OpenAI-compatible API endpoints in the cloud. Supports fine-tuning, quantization, and distributed inference for production-grade LLM deployment.
A local knowledge-base RAG and agent application platform built on LangChain, supporting ChatGLM, Qwen, Llama, and other LLMs; it offers conversation, knowledge base management, and agent capabilities.