ChatWatsonx is a wrapper for IBM watsonx.ai foundation models. The aim of these examples is to show how to communicate with watsonx.ai models using the LangChain chat models API.
## Overview

### Integration details
Class | Package | Local | Serializable | JS support |
---|---|---|---|---|
ChatWatsonx | langchain-ibm | ❌ | ❌ | ✅ |
### Model features
Tool calling | Structured output | JSON mode | Image input | Audio input | Video input | Token-level streaming | Native async | Token usage | Logprobs |
---|---|---|---|---|---|---|---|---|---|
✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | ❌ | ✅ | ✅ |
## Setup
To access IBM watsonx.ai models you’ll need to create an IBM watsonx.ai account, get an API key, and install the `langchain-ibm` integration package.
### Credentials
The cell below defines the credentials required to work with watsonx Foundation Model inferencing.

Action: Provide the IBM Cloud user API key. For details, see Managing user API keys.
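For example, a minimal way to prompt for the key and expose it as the environment variable that `langchain-ibm` reads:

```python
import os
from getpass import getpass

# Prompt for the IBM Cloud user API key without echoing it
watsonx_api_key = getpass("Please enter your watsonx.ai API key: ")
os.environ["WATSONX_APIKEY"] = watsonx_api_key
```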
### Installation

The LangChain IBM integration lives in the `langchain-ibm` package:
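```bash
pip install -qU langchain-ibm
```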
## Instantiation
You might need to adjust model `parameters` for different models or tasks. For details, refer to Available TextChatParameters.
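For example, a minimal parameters dictionary (the values below are illustrative):

```python
parameters = {
    "temperature": 0.9,
    "max_tokens": 200,
}
```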
Initialize the `ChatWatsonx` class with the previously set parameters.
Note:

- To provide context for the API call, you must pass the `project_id` or `space_id`. To get your project or space ID, open your project or space, go to the Manage tab, and click General. For more information see: Project documentation or Deployment space documentation.
- Depending on the region of your provisioned service instance, use one of the URLs listed in watsonx.ai API Authentication.
In this example, we’ll use the `project_id` and Dallas URL.
You need to specify the `model_id` that will be used for inferencing. You can find the list of all the available models in Supported chat models.
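A minimal sketch, assuming the Dallas endpoint; the model ID is illustrative, and the `project_id` placeholder should be replaced with your own value:

```python
from langchain_ibm import ChatWatsonx

chat = ChatWatsonx(
    model_id="ibm/granite-34b-code-instruct",  # example model ID
    url="https://us-south.ml.cloud.ibm.com",  # Dallas endpoint
    project_id="PASTE YOUR PROJECT_ID HERE",
    params=parameters,
)
```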
Instead of `model_id`, you can also pass the `deployment_id` of the previously deployed model with reference to a Prompt Template.
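For instance (the placeholders are illustrative):

```python
chat = ChatWatsonx(
    deployment_id="PASTE YOUR DEPLOYMENT_ID HERE",
    url="https://us-south.ml.cloud.ibm.com",
    project_id="PASTE YOUR PROJECT_ID HERE",
    params=parameters,
)
```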
For certain requirements, you can also pass IBM's `APIClient` object into the `ChatWatsonx` class.
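A sketch of this pattern, assuming you construct the `APIClient` with your own credentials:

```python
from ibm_watsonx_ai import APIClient

api_client = APIClient(...)  # construct with your watsonx.ai credentials

chat = ChatWatsonx(
    model_id="ibm/granite-34b-code-instruct",  # example model ID
    watsonx_client=api_client,
)
```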
## Invocation
To obtain completions, you can call the model directly using a string prompt.
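A minimal direct call (the prompt text is illustrative):

```python
# Call the model with a plain string prompt
response = chat.invoke("What color are sunflowers?")
print(response.content)
```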
## Chaining

Create `ChatPromptTemplate` objects which will be responsible for creating a random question.
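A minimal sketch of such a chain (the prompt wording is illustrative):

```python
from langchain_core.prompts import ChatPromptTemplate

prompt = ChatPromptTemplate.from_messages(
    [
        ("system", "You are a helpful assistant."),
        ("human", "Create a random question about {topic}."),
    ]
)

# Pipe the prompt into the chat model to form a runnable chain
chain = prompt | chat
response = chain.invoke({"topic": "dogs"})
print(response.content)
```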
## Streaming the Model output
You can stream the model output.
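For example (the prompt is illustrative):

```python
# Print tokens as they are generated
for chunk in chat.stream("Describe your favorite breed of dog."):
    print(chunk.content, end="")
```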
## Batch the Model output

You can batch the model output.
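For example (the prompts are illustrative):

```python
# Run several prompts in a single batch call
responses = chat.batch(["What is a cat?", "What is a dog?"])
for response in responses:
    print(response.content)
```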
## Tool calling

### ChatWatsonx.bind_tools()
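A minimal sketch of binding a tool schema to the model; the `GetWeather` schema and the prompt are illustrative:

```python
from pydantic import BaseModel, Field


class GetWeather(BaseModel):
    """Get the current weather in a given location."""

    location: str = Field(..., description="The city and state, e.g. San Francisco, CA")


# Bind the tool schema so the model can emit structured tool calls
llm_with_tools = chat.bind_tools([GetWeather])
ai_msg = llm_with_tools.invoke("Which city is hotter today: Los Angeles or New York?")
```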
### AIMessage.tool_calls
Notice that the AIMessage has a `tool_calls` attribute. This contains tool calls in a standardized ToolCall format that is model-provider agnostic.
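Continuing the sketch above:

```python
# Inspect the standardized, provider-agnostic tool calls
ai_msg.tool_calls
```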
## API reference
For detailed documentation of all `ChatWatsonx` features and configurations, head to the API reference.