ChatContextual

This integration uses the contextual-client Python SDK. Learn more about it here.
Overview
This integration invokes Contextual AI’s Grounded Language Model.

Integration details
Class | Package | Local | Serializable | JS support | Downloads | Version |
---|---|---|---|---|---|---|
ChatContextual | langchain-contextual | ❌ | beta | ❌ | | |
Model features
Tool calling | Structured output | JSON mode | Image input | Audio input | Video input | Token-level streaming | Native async | Token usage | Logprobs |
---|---|---|---|---|---|---|---|---|---|
❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ |
Setup
To access Contextual models you’ll need to create a Contextual AI account, get an API key, and install the `langchain-contextual` integration package.
Credentials
Head to app.contextual.ai to sign up for Contextual and generate an API key. Once you’ve done this, set the CONTEXTUAL_AI_API_KEY environment variable:
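A minimal sketch for setting the key interactively, assuming you haven’t already exported it in your shell:

```python
import getpass
import os

# Prompt for the API key only if it isn't already set in the environment
if not os.environ.get("CONTEXTUAL_AI_API_KEY"):
    os.environ["CONTEXTUAL_AI_API_KEY"] = getpass.getpass(
        "Enter your Contextual AI API key: "
    )
```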
Installation
The LangChain Contextual integration lives in the `langchain-contextual` package:
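For example, with pip:

```bash
pip install -qU langchain-contextual
```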
Instantiation
Now we can instantiate our model object and generate chat completions. The chat client can be instantiated with the following additional settings (a sketch follows the table):

Parameter | Type | Description | Default |
---|---|---|---|
temperature | Optional[float] | The sampling temperature, which affects the randomness in the response. Note that higher temperature values can reduce groundedness. | 0 |
top_p | Optional[float] | A parameter for nucleus sampling, an alternative to temperature which also affects the randomness of the response. Note that higher top_p values can reduce groundedness. | 0.9 |
max_new_tokens | Optional[int] | The maximum number of tokens that the model can generate in the response. Minimum is 1 and maximum is 2048. | 1024 |
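Putting the table together, here is a minimal instantiation sketch. The model name "v1" and the empty api_key are illustrative placeholders; check Contextual’s documentation for supported values:

```python
from langchain_contextual import ChatContextual

llm = ChatContextual(
    model="v1",  # illustrative model name; consult Contextual's docs for available models
    api_key="",  # if empty, the CONTEXTUAL_AI_API_KEY environment variable is typically used
    temperature=0,  # defaults to 0
    top_p=0.9,  # defaults to 0.9
    max_new_tokens=1024,  # defaults to 1024
)
```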
Invocation
The Contextual Grounded Language Model accepts additional `kwargs` when calling the `ChatContextual.invoke` method (a usage sketch follows the table below).
These additional inputs are:
Parameter | Type | Description |
---|---|---|
knowledge | list[str] | Required: A list of strings of knowledge sources the grounded language model can use when generating a response. |
system_prompt | Optional[str] | Optional: Instructions the model should follow when generating responses. Note that we do not guarantee that the model follows these instructions exactly. |
avoid_commentary | Optional[bool] | Optional (defaults to `False`): Flag indicating whether the model should avoid providing additional commentary in responses. Commentary is conversational in nature and does not contain verifiable claims; therefore, it is not strictly grounded in the available context. However, commentary may provide useful context that improves the helpfulness of responses. |
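As a sketch, reusing the `llm` client from the instantiation example above (the knowledge strings and prompts are placeholder content):

```python
# optional system prompt to steer the model
system_prompt = "You are a helpful assistant with a deep knowledge of cats and dogs."

# required: knowledge strings the grounded language model answers from
knowledge = [
    "There are 2 types of dogs in the world: good dogs and best dogs.",
    "There are 2 types of cats in the world: good cats and best cats.",
]

# standard LangChain-style message list
messages = [
    ("human", "What types of cats are there in the world?"),
]

# pass the extra inputs as kwargs to invoke; avoid_commentary=True asks the
# model to skip conversational commentary that is not grounded in the knowledge
ai_msg = llm.invoke(
    messages,
    knowledge=knowledge,
    system_prompt=system_prompt,
    avoid_commentary=True,
)
print(ai_msg.content)
```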