Hugging Face models can be run locally through the HuggingFacePipeline class.
The Hugging Face Model Hub hosts over 120k models, 20k datasets, and 50k demo apps (Spaces), all open source and publicly available, in an online platform where people can easily collaborate and build ML together.
These can be called from LangChain either through this local pipeline wrapper or by calling their hosted inference endpoints through the HuggingFaceHub class.
To use, you should have the transformers Python package installed, as well as PyTorch. You can also install xformers for a more memory-efficient attention implementation.
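A minimal notebook install cell might look like the following; xformers is optional, and the exact package set depends on your environment:

```python
%pip install --upgrade --quiet transformers torch xformers
```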
Model Loading
Models can be loaded by specifying the model parameters using the from_model_id method. They can also be loaded by passing in an existing transformers pipeline directly.
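As a sketch, both loading paths might look like this; the model ID, task, and generation settings are illustrative, and the import path may differ by LangChain version (e.g. langchain_community.llms instead of langchain_huggingface):

```python
from langchain_huggingface import HuggingFacePipeline

# Option 1: load by model id with from_model_id (settings are illustrative).
hf = HuggingFacePipeline.from_model_id(
    model_id="gpt2",
    task="text-generation",
    pipeline_kwargs={"max_new_tokens": 10},
)

# Option 2: wrap an existing transformers pipeline directly.
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

model_id = "gpt2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
pipe = pipeline("text-generation", model=model, tokenizer=tokenizer, max_new_tokens=10)
hf = HuggingFacePipeline(pipeline=pipe)
```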
Create Chain
With the model loaded into memory, you can compose it with a prompt to form a chain. To get the response without the prompt echoed back, you can bind skip_prompt=True to the LLM.
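A minimal chain sketch, reusing the hf object from above; the prompt template and question are illustrative:

```python
from langchain_core.prompts import PromptTemplate

template = """Question: {question}

Answer: Let's think step by step."""
prompt = PromptTemplate.from_template(template)

chain = prompt | hf
question = "What is electroencephalography?"
print(chain.invoke({"question": question}))

# Bind skip_prompt=True so only the generated text (not the prompt) is returned.
chain = prompt | hf.bind(skip_prompt=True)
print(chain.invoke({"question": question}))
```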
GPU Inference
When running on a machine with a GPU, you can specify the device=n parameter to put the model on the specified device. It defaults to -1 for CPU inference.
If you have multiple GPUs and/or the model is too large for a single GPU, you can specify device_map="auto", which requires and uses the Accelerate library to automatically determine how to load the model weights.
Note: device and device_map should not be specified together; doing so can lead to unexpected behavior.
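For instance, a GPU-backed LLM might be created as follows; the model choice and device index are illustrative, and depending on your version device_map may be passed directly or through model_kwargs:

```python
gpu_llm = HuggingFacePipeline.from_model_id(
    model_id="gpt2",
    task="text-generation",
    device=0,  # put the model on GPU 0; use device_map="auto" instead to shard across GPUs
    pipeline_kwargs={"max_new_tokens": 10},
)

gpu_chain = prompt | gpu_llm
print(gpu_chain.invoke({"question": question}))
```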
Batch GPU Inference
If running on a device with a GPU, you can also run inference on the GPU in batch mode.
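A batched sketch; the model, batch size, stop sequence, and questions are illustrative:

```python
gpu_llm = HuggingFacePipeline.from_model_id(
    model_id="bigscience/bloom-1b7",
    task="text-generation",
    device=0,
    batch_size=2,  # adjust based on GPU memory and model size
    model_kwargs={"temperature": 0, "max_length": 64},
)
gpu_chain = prompt | gpu_llm.bind(stop=["\n\n"])

questions = [{"question": f"What is the number {i} in french?"} for i in range(4)]
answers = gpu_chain.batch(questions)
for answer in answers:
    print(answer)
```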
Inference with OpenVINO backend
To deploy a model with OpenVINO, you can specify the backend="openvino" parameter to trigger OpenVINO as the backend inference framework.
If you have an Intel GPU, you can specify model_kwargs={"device": "GPU"}
to run inference on it.
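A sketch, assuming Optimum Intel with OpenVINO support is installed (e.g. optimum[openvino]); the model and generation settings are illustrative:

```python
ov_llm = HuggingFacePipeline.from_model_id(
    model_id="gpt2",
    task="text-generation",
    backend="openvino",
    model_kwargs={"device": "CPU"},  # set "GPU" to target an Intel GPU
    pipeline_kwargs={"max_new_tokens": 10},
)

ov_chain = prompt | ov_llm
print(ov_chain.invoke({"question": question}))
```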
Inference with local OpenVINO model
It is possible to export your model to the OpenVINO IR format with the CLI and load the model from a local folder. It is recommended to apply 8-bit or 4-bit weight quantization with --weight-format to reduce inference latency and model footprint. Additional OpenVINO runtime options can be passed through ov_config as follows:
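A sketch of the export-and-load flow; the export command, local path, and ov_config values are illustrative and may need adjusting for your OpenVINO version:

```python
# First export the model to OpenVINO IR, e.g. from a shell:
#   optimum-cli export openvino --model gpt2 --weight-format int8 ov_model_dir

ov_config = {"PERFORMANCE_HINT": "LATENCY", "NUM_STREAMS": "1", "CACHE_DIR": ""}

ov_llm = HuggingFacePipeline.from_model_id(
    model_id="ov_model_dir",  # path to the exported local folder
    task="text-generation",
    backend="openvino",
    model_kwargs={"device": "CPU", "ov_config": ov_config},
    pipeline_kwargs={"max_new_tokens": 10},
)

print((prompt | ov_llm).invoke({"question": question}))
```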