Google Cloud Vertex AI Feature Store streamlines your ML feature management and online serving processes by letting you serve your Google Cloud BigQuery data at low latency, including the ability to perform approximate nearest neighbor retrieval for embeddings.

This tutorial shows you how to easily perform low-latency vector search and approximate nearest neighbor retrieval directly from your BigQuery data, enabling powerful ML applications with minimal setup. We will do that using the `VertexFSVectorStore` class.
This class is part of a set of two classes that provide unified data storage and flexible vector search in Google Cloud:

- BigQuery Vector Search: with the `BigQueryVectorStore` class, which is ideal for rapid prototyping with no infrastructure setup and for batch retrieval.
- Feature Store Online Store: with the `VertexFSVectorStore` class, which enables low-latency retrieval with manual or scheduled data sync. Perfect for production-ready, user-facing GenAI applications.
Getting started
Install the library
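A typical install cell; the package name and `featurestore` extra below are an assumption you should verify against the current LangChain Google Community documentation:

```shell
pip install --upgrade --quiet "langchain-google-community[featurestore]"
```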
Before you begin
Set your project ID
If you don’t know your project ID, try the following:

- Run `gcloud config list`.
- Run `gcloud projects list`.
- See the support page: Locate the project ID.
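Then set the project ID as a variable; `"your-project-id"` below is a placeholder to replace with your own value:

```python
import os

# Replace with your own Google Cloud project ID.
PROJECT_ID = "your-project-id"  # @param {type:"string"}

# Make the project visible to client libraries that read the environment.
os.environ["GOOGLE_CLOUD_PROJECT"] = PROJECT_ID
```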
Set the region
You can also change the `REGION` variable used by BigQuery. Learn more about BigQuery regions.
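For example, using `us-central1` as a default:

```python
# Default region; change it to any region supported by BigQuery.
REGION = "us-central1"  # @param {type:"string"}
```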
Set the dataset and table names
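A sketch of the two variables, using placeholder names you can change:

```python
# Names for the BigQuery dataset and table backing the vector store.
DATASET = "my_langchain_dataset"  # @param {type:"string"}
TABLE = "doc_and_vectors"  # @param {type:"string"}
```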
They will be your BigQuery Vector Store.

Authenticating your notebook environment
- If you are using Colab to run this notebook, uncomment the cell below and continue.
- If you are using Vertex AI Workbench, check out the setup instructions here.
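In Colab, the authentication cell typically looks like this (it is a no-op outside Colab):

```python
import sys

# Authenticate only when running inside Google Colab.
if "google.colab" in sys.modules:
    from google.colab import auth

    auth.authenticate_user()
```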
Demo: VertexFSVectorStore
Create an embedding class instance
You may need to enable the Vertex AI API in your project by running `gcloud services enable aiplatform.googleapis.com --project {PROJECT_ID}` (replace `{PROJECT_ID}` with the name of your project).
You can use any LangChain embeddings model.
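For example, with Vertex AI embeddings — the model name below is an assumption; check the current list of available Vertex AI embedding models:

```python
from langchain_google_vertexai import VertexAIEmbeddings

# Any LangChain embeddings model works; here we assume a Vertex AI
# text embedding model (requires the Vertex AI API to be enabled).
embedding = VertexAIEmbeddings(
    model_name="textembedding-gecko@latest", project=PROJECT_ID
)
```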
Initialize VertexFSVectorStore
The BigQuery dataset and table will be automatically created if they do not exist. See the class definition here for all optional parameters.

Add texts
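A minimal sketch covering both initialization and adding texts; the constructor parameter names are assumptions based on the class definition, and the `embedding` object and `PROJECT_ID`, `REGION`, `DATASET`, and `TABLE` variables are assumed to be defined as above:

```python
from langchain_google_community import VertexFSVectorStore

# Initialize the store; the dataset and table are created if missing.
store = VertexFSVectorStore(
    project_id=PROJECT_ID,
    dataset_name=DATASET,
    table_name=TABLE,
    location=REGION,
    embedding=embedding,
)

# Add texts; each document's metadata is stored alongside its embedding.
all_texts = ["Apples and oranges", "Cars and airplanes", "Pineapple", "Train", "Banana"]
metadatas = [{"len": len(t)} for t in all_texts]
store.add_texts(all_texts, metadatas=metadatas)
```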
Note: The first synchronization process will take around ~20 minutes because of the Feature Online Store creation.

You can also trigger a synchronization manually with the `sync_data` method.

In production, you can use the `cron_schedule` class parameter to set up an automatic scheduled synchronization. For example:
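A hypothetical schedule that syncs once a day (standard cron syntax, optionally prefixed with a time zone); parameter names are assumptions to verify against the class definition:

```python
from langchain_google_community import VertexFSVectorStore

# Sync every day at midnight in the given time zone (illustrative schedule).
store = VertexFSVectorStore(
    project_id=PROJECT_ID,
    dataset_name=DATASET,
    table_name=TABLE,
    location=REGION,
    embedding=embedding,
    cron_schedule="TZ=America/Los_Angeles 0 0 * * *",
)
```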
Search for documents
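A minimal similarity search, assuming the `store` object created above:

```python
# Retrieve the documents most similar to a natural-language query.
query = "I'd like a fruit."
docs = store.similarity_search(query)
print(docs)
```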
Search for documents by vector
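A sketch of searching with a precomputed query vector, assuming the `embedding` object from earlier:

```python
# Embed the query yourself, then search by vector.
query_vector = embedding.embed_query("I'd like a fruit.")
docs = store.similarity_search_by_vector(query_vector, k=2)
print(docs)
```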
Search for documents with metadata filter
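A sketch of filtering on the metadata stored with each document; the `len` field is assumed to have been added as metadata when the texts were inserted:

```python
# Only return documents whose metadata matches the filter.
docs = store.similarity_search(
    "I'd like a fruit.", filter={"len": 6}
)
print(docs)
```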
Add text with embeddings
You can also bring your own embeddings with the `add_texts_with_embeddings` method.
This is particularly useful for multimodal data which might require custom preprocessing before the embedding generation.
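A sketch of supplying precomputed embeddings; the `embs` parameter name is an assumption to verify against the class definition:

```python
# Precompute embeddings (e.g. after custom multimodal preprocessing).
texts = ["Pineapple", "Banana"]
embs = [embedding.embed_query(t) for t in texts]

# Store the texts together with the embeddings you computed yourself.
ids = store.add_texts_with_embeddings(
    texts=texts, embs=embs, metadatas=[{"len": len(t)} for t in texts]
)
```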
Batch serving with BigQuery
You can simply use the `.to_bq_vector_store()` method to get a `BigQueryVectorStore` object, which offers optimized performance for batch use cases. All mandatory parameters will be automatically transferred from the existing class. See the class definition for all the parameters you can use.
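For example — the `batch_search` call below is an assumption based on the `BigQueryVectorStore` interface:

```python
# Convert the online store to a BigQueryVectorStore for batch retrieval.
bq_store = store.to_bq_vector_store()

# Run several queries in one batched call.
results = bq_store.batch_search(queries=["I'd like a fruit.", "Transport"])
```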
Moving back to `VertexFSVectorStore` is just as easy with the `.to_vertex_fs_vector_store()` method.
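For example, assuming the `bq_store` object from the previous step:

```python
# Convert the batch store back to a low-latency online store.
store = bq_store.to_vertex_fs_vector_store()
```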