Intel Extension for PyTorch LLM
Overview
Accelerates local LLM inference and fine-tuning on Intel XPUs. Supports LLaMA, Mistral, Qwen, DeepSeek, and more, and integrates seamlessly with LangChain, LlamaIndex, and other agent frameworks.
A local knowledge-base RAG and agent application platform built on LangChain, supporting ChatGLM, Qwen, Llama, and other LLMs, with conversation, knowledge-base management, and agent capabilities.
An enterprise-oriented Agentic RAG platform. Provides out-of-the-box retrieval-augmented generation with Docker-based deployment, simplifying how enterprise RAG applications are built and managed.
A MemAgent framework that extrapolates to 3.5M context tokens, together with a training framework for RL training of any agent workflow.
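The core idea behind a memory agent of this kind — reading arbitrarily long input with a fixed-size working memory that is rewritten at every step — can be sketched in plain Python. This is a toy illustration, not the project's actual code: the `update_memory` function here is a naive truncation stand-in for the LLM policy that a real MemAgent trains with RL, and all names are invented for the sketch.

```python
def chunk(text, size):
    """Split text into fixed-size chunks."""
    return [text[i:i + size] for i in range(0, len(text), size)]

def update_memory(memory, piece, limit=200):
    """Stand-in for the learned memory-update step: merge the new chunk
    into memory and keep only the most recent `limit` characters.
    A real MemAgent would use an RL-trained LLM to decide what to keep."""
    merged = (memory + " " + piece).strip()
    return merged[-limit:]

def read_long_text(text, chunk_size=100, memory_limit=200):
    """Process arbitrarily long input with a bounded working context:
    memory size stays <= memory_limit no matter how long `text` is."""
    memory = ""
    for piece in chunk(text, chunk_size):
        memory = update_memory(memory, piece, memory_limit)
    return memory

# A long document whose useful fact appears at the very end.
long_text = "x" * 10_000 + " the answer is 42"
final_memory = read_long_text(long_text)
```

Because the memory budget is constant, the loop's working context never grows with input length, which is what lets the real framework extrapolate far beyond the model's native context window.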
An opinionated RAG framework for integrating GenAI into your apps. Works with any LLM, any vector store, and any files, so you can focus on your product instead of building RAG pipelines.
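The retrieve-then-generate pipeline that these RAG frameworks abstract away can be sketched with stdlib-only Python. This is a minimal bag-of-words illustration under stated assumptions: a real framework would use learned embeddings, a vector store, and an LLM call where `build_prompt` stops; every function name here is invented for the sketch.

```python
import math
from collections import Counter

def vectorize(text):
    """Bag-of-words term frequencies for lowercased, whitespace-split text."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse term-frequency vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, docs, k=1):
    """Return the k documents most similar to the query."""
    q = vectorize(query)
    ranked = sorted(docs, key=lambda d: cosine(q, vectorize(d)), reverse=True)
    return ranked[:k]

def build_prompt(query, docs, k=1):
    """Assemble retrieved context and the question into one LLM prompt."""
    context = "\n".join(retrieve(query, docs, k))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

docs = [
    "LangChain is a framework for building LLM applications.",
    "Docker packages applications into containers.",
    "RAG augments generation with retrieved documents.",
]
prompt = build_prompt("what does RAG do?", docs)
```

In production pipelines the term-frequency vectors are replaced by dense embeddings and the linear scan by an approximate nearest-neighbor index, but the retrieve-then-prompt shape stays the same.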