This will help you get started with Netmind chat models. For detailed documentation of all ChatNetmind features and configurations, head to the API reference.

Overview

Integration details

| Class | Package | Local | Serializable | JS support | Downloads | Version |
| :--- | :--- | :---: | :---: | :---: | :---: | :---: |
| ChatNetmind | langchain-netmind | | | | PyPI - Downloads | PyPI - Version |

Model features

| Tool calling | Structured output | JSON mode | Image input | Audio input | Video input | Token-level streaming | Native async | Token usage | Logprobs |

Setup

To access Netmind models, you'll need to create a Netmind account, get an API key, and install the langchain-netmind integration package.

Credentials

Head to www.netmind.ai/ to sign up for Netmind and generate an API key. Once you've done this, set the NETMIND_API_KEY environment variable:
import getpass
import os

if not os.getenv("NETMIND_API_KEY"):
    os.environ["NETMIND_API_KEY"] = getpass.getpass("Enter your Netmind API key: ")
If you want automated tracing of your model calls, you can also set your LangSmith API key by uncommenting the lines below:
# os.environ["LANGCHAIN_TRACING_V2"] = "true"
# os.environ["LANGCHAIN_API_KEY"] = getpass.getpass("Enter your LangSmith API key: ")

Installation

The LangChain Netmind integration lives in the langchain-netmind package:
%pip install -qU langchain-netmind
Note: you may need to restart the kernel to use updated packages.

Instantiation

Now we can instantiate our model object and generate chat completions:
from langchain_netmind import ChatNetmind

llm = ChatNetmind(
    model="deepseek-ai/DeepSeek-V3",
    temperature=0,
    max_tokens=None,
    timeout=None,
    max_retries=2,
    # other params...
)

Invocation

messages = [
    (
        "system",
        "You are a helpful assistant that translates English to French. Translate the user sentence.",
    ),
    ("human", "I love programming."),
]
ai_msg = llm.invoke(messages)
ai_msg
AIMessage(content="J'adore programmer.", additional_kwargs={'refusal': None}, response_metadata={'token_usage': {'completion_tokens': 13, 'prompt_tokens': 31, 'total_tokens': 44, 'completion_tokens_details': None, 'prompt_tokens_details': None}, 'model_name': 'deepseek-ai/DeepSeek-V3', 'system_fingerprint': None, 'finish_reason': 'stop', 'logprobs': None}, id='run-ca6c2010-844d-4bf6-baac-6e248491b000-0', usage_metadata={'input_tokens': 31, 'output_tokens': 13, 'total_tokens': 44, 'input_token_details': {}, 'output_token_details': {}})
print(ai_msg.content)
J'adore programmer.
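
You can also stream the response token by token. A minimal sketch, assuming ChatNetmind follows the standard LangChain chat-model streaming interface (see "Token-level streaming" in the model features above):

for chunk in llm.stream(messages):
    # each chunk is an AIMessageChunk; print its content as it arrives
    print(chunk.content, end="", flush=True)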

Chaining

We can chain our model with a prompt template like so:
from langchain_core.prompts import ChatPromptTemplate

prompt = ChatPromptTemplate(
    [
        (
            "system",
            "You are a helpful assistant that translates {input_language} to {output_language}.",
        ),
        ("human", "{input}"),
    ]
)

chain = prompt | llm
chain.invoke(
    {
        "input_language": "English",
        "output_language": "German",
        "input": "I love programming.",
    }
)
AIMessage(content='Ich liebe es zu programmieren.', additional_kwargs={'refusal': None}, response_metadata={'token_usage': {'completion_tokens': 14, 'prompt_tokens': 26, 'total_tokens': 40, 'completion_tokens_details': None, 'prompt_tokens_details': None}, 'model_name': 'deepseek-ai/DeepSeek-V3', 'system_fingerprint': None, 'finish_reason': 'stop', 'logprobs': None}, id='run-d63adcc6-53ba-4caa-9a79-78d640b39274-0', usage_metadata={'input_tokens': 26, 'output_tokens': 14, 'total_tokens': 40, 'input_token_details': {}, 'output_token_details': {}})
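
The chain also works with LangChain's async interface. A minimal sketch, assuming the standard ainvoke coroutine available on runnables (see "Native async" in the model features above):

import asyncio

async def main() -> None:
    # ainvoke is the async counterpart of invoke
    result = await chain.ainvoke(
        {
            "input_language": "English",
            "output_language": "German",
            "input": "I love programming.",
        }
    )
    print(result.content)

asyncio.run(main())

In a notebook, you can simply await chain.ainvoke(...) directly instead of wrapping it in asyncio.run.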

API reference

For detailed documentation of all ChatNetmind features and configurations, head to the API reference.