Overview
The `NebiusEmbeddings` class provides access to Nebius AI Studio’s embedding models through LangChain. These embeddings can be used for semantic search, document similarity, and other NLP tasks requiring vector representations of text.
Integration details
- Provider: Nebius AI Studio
- Model Types: Text embedding models
- Primary Use Case: Generate vector representations of text for semantic similarity and retrieval
- Available Models: Various embedding models including BAAI/bge-en-icl and others
- Dimensions: Varies by model (typically 1024-4096 dimensions)
Setup
Installation
The Nebius integration can be installed via pip:
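For example, assuming the partner package is published as `langchain-nebius`:

```bash
pip install -U langchain-nebius
```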
Credentials
Nebius requires an API key that can be passed as an initialization parameter (`api_key`) or set as the environment variable `NEBIUS_API_KEY`. You can obtain an API key by creating an account on Nebius AI Studio.
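A minimal sketch for supplying the key through the `NEBIUS_API_KEY` environment variable:

```python
import getpass
import os

# Prompt for the key only if it is not already set in the environment
if "NEBIUS_API_KEY" not in os.environ:
    os.environ["NEBIUS_API_KEY"] = getpass.getpass("Enter your Nebius API key: ")
```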
Instantiation
The `NebiusEmbeddings` class can be instantiated with optional parameters for the API key and model name:
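A minimal sketch, assuming `NebiusEmbeddings` is importable from the `langchain_nebius` package and that `BAAI/bge-en-icl` is one of the available model names:

```python
from langchain_nebius import NebiusEmbeddings

# The API key is read from NEBIUS_API_KEY if api_key is not passed explicitly
embeddings = NebiusEmbeddings(model="BAAI/bge-en-icl")
```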
Available Models
The list of supported models is available at studio.nebius.com/?modality=embedding
Indexing and Retrieval
Embedding models are often used in retrieval-augmented generation (RAG) flows, both for indexing data and later retrieving it. The following example demonstrates how to use `NebiusEmbeddings` with a vector store for document retrieval.
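One possible sketch, using FAISS as the vector store (any LangChain vector store works here; this assumes `faiss-cpu` and `langchain-community` are installed and reuses the `embeddings` object from above):

```python
from langchain_community.vectorstores import FAISS

# Index a few example documents with Nebius embeddings
texts = [
    "Paris is the capital of France",
    "Berlin is the capital of Germany",
    "Rome is the capital of Italy",
]
vector_store = FAISS.from_texts(texts, embedding=embeddings)

# Retrieve the documents most similar to a query
retriever = vector_store.as_retriever(search_kwargs={"k": 2})
for doc in retriever.invoke("What is the capital of France?"):
    print(doc.page_content)
```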
Using with InMemoryVectorStore
You can also use the `InMemoryVectorStore` for lightweight applications:
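A minimal sketch, assuming the `embeddings` object from above and the in-memory store shipped with `langchain-core`:

```python
from langchain_core.vectorstores import InMemoryVectorStore

# Build a small in-memory index; nothing is persisted to disk
store = InMemoryVectorStore.from_texts(
    ["LangChain is a framework for building applications with LLMs"],
    embedding=embeddings,
)

# Query the store for the most similar document
results = store.similarity_search("What is LangChain?", k=1)
print(results[0].page_content)
```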
Direct Usage
You can directly use the `NebiusEmbeddings` class to generate embeddings for text without using a vector store.
Embedding a Single Text
You can use the `embed_query` method to embed a single piece of text:
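For example, reusing the `embeddings` object created earlier (a sketch):

```python
# Embed a single query string; the result is a list of floats
query_embedding = embeddings.embed_query("What is machine learning?")

print(len(query_embedding))  # dimensionality of the chosen model
print(query_embedding[:5])   # first few values of the vector
```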
Embedding Multiple Texts
You can embed multiple texts at once using the `embed_documents` method:
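For example (a sketch, again reusing the `embeddings` object from above):

```python
# Embed several texts in one call; one vector is returned per input text
documents = [
    "Machine learning is a subset of artificial intelligence",
    "Deep learning uses neural networks with many layers",
]
document_embeddings = embeddings.embed_documents(documents)

print(len(document_embeddings))     # number of vectors (one per document)
print(len(document_embeddings[0]))  # dimensionality of each vector
```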