ChatLiteLLM: The main LangChain wrapper for basic usage of LiteLLM (docs).
ChatLiteLLMRouter: A `ChatLiteLLM` wrapper that leverages LiteLLM's Router (docs).
Table of Contents
- Overview
- Setup
- Credentials
- Installation
- Instantiation
- Invocation
- Async and Streaming Functionality
- API Reference
Overview
Integration details
| Class | Package | Local | Serializable | JS support | Downloads | Version |
| --- | --- | --- | --- | --- | --- | --- |
| ChatLiteLLM | langchain-litellm | ❌ | ❌ | ❌ | | |
| ChatLiteLLMRouter | langchain-litellm | ❌ | ❌ | ❌ | | |
Model features
| Tool calling | Structured output | JSON mode | Image input | Audio input | Video input | Token-level streaming | Native async | Token usage | Logprobs |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ |
Setup
To access `ChatLiteLLM` and `ChatLiteLLMRouter` models, you'll need to install the `langchain-litellm` package and create an account with a provider such as OpenAI, Anthropic, Azure, Replicate, OpenRouter, Hugging Face, Together AI, or Cohere. Then, get an API key and export it as an environment variable.
Credentials
Choose the LLM provider you want, then sign up with them to get an API key.
Example - Anthropic
Head to console.anthropic.com/ to sign up for Anthropic and generate an API key. Once you've done this, set the ANTHROPIC_API_KEY environment variable.
Example - OpenAI
Head to platform.openai.com/api-keys to sign up for OpenAI and generate an API key. Once you've done this, set the OPENAI_API_KEY environment variable.
Installation
The LangChain LiteLLM integration is available in the `langchain-litellm` package:
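Install it with pip (the `-U` flag upgrades any existing install):

```shell
pip install -U langchain-litellm
```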
Instantiation
ChatLiteLLM
You can instantiate a `ChatLiteLLM` model by providing a `model` name supported by LiteLLM.
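A minimal sketch, assuming the import path follows the package name (`langchain_litellm`) and that the relevant provider API key is already exported; the model string is a placeholder you'd swap for any LiteLLM-supported name:

```python
from langchain_litellm import ChatLiteLLM

# Assumes OPENAI_API_KEY is set in your environment.
# The model string can be any name LiteLLM supports, e.g. "gpt-4o"
# or a prefixed one such as "anthropic/claude-3-haiku-20240307".
llm = ChatLiteLLM(model="gpt-4o", temperature=0)
```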
ChatLiteLLMRouter
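`ChatLiteLLMRouter` wraps a LiteLLM `Router` built from a model list. A minimal sketch, assuming two hypothetical deployments behind one shared alias and placeholder API keys; check LiteLLM's Router docs for the full model-list schema:

```python
from langchain_litellm import ChatLiteLLMRouter
from litellm import Router

# Two deployments share the alias "gpt-4o"; the Router load-balances
# between them. Keys below are placeholders.
model_list = [
    {
        "model_name": "gpt-4o",
        "litellm_params": {"model": "gpt-4o", "api_key": "sk-primary-..."},
    },
    {
        "model_name": "gpt-4o",
        "litellm_params": {"model": "azure/gpt-4o", "api_key": "sk-backup-..."},
    },
]

router = Router(model_list=model_list)
llm = ChatLiteLLMRouter(router=router, model_name="gpt-4o")
```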
You can also leverage LiteLLM's routing capabilities by defining your model list as specified here.
Invocation
Whether you've instantiated a `ChatLiteLLM` or a `ChatLiteLLMRouter`, you can now use the ChatModel through LangChain's API.
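For example, via the standard `invoke` interface — a self-contained sketch with a placeholder model name, assuming the matching provider key is exported:

```python
from langchain_litellm import ChatLiteLLM

# Works the same with a ChatLiteLLMRouter instance.
llm = ChatLiteLLM(model="gpt-4o")  # placeholder; assumes OPENAI_API_KEY is set

response = llm.invoke("What is the capital of France?")
# invoke() returns an AIMessage; the generated text is in .content
print(response.content)
```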
Async and Streaming Functionality
`ChatLiteLLM` and `ChatLiteLLMRouter` also support async and streaming functionality:
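A sketch of both, using LangChain's standard `ainvoke` and `astream` methods (the model name is a placeholder and the provider key is assumed to be exported):

```python
import asyncio

from langchain_litellm import ChatLiteLLM

llm = ChatLiteLLM(model="gpt-4o")  # assumes OPENAI_API_KEY is set


async def main() -> None:
    # Native async invocation: returns the full AIMessage at once.
    reply = await llm.ainvoke("Write a haiku about routers.")
    print(reply.content)

    # Token-level streaming: chunks arrive as they are generated.
    async for chunk in llm.astream("Count to five."):
        print(chunk.content, end="", flush=True)


asyncio.run(main())
```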
API Reference
For detailed documentation of all `ChatLiteLLM` and `ChatLiteLLMRouter` features and configurations, head to the API reference: github.com/Akshay-Dongare/langchain-litellm