MLX models can be run locally through the MLXPipeline class.
The MLX Community hosts over 150 models, all open source and publicly available on the Hugging Face Model Hub, an online platform where people can easily collaborate and build ML together.
These can be called from LangChain through this local pipeline wrapper, the MLXPipeline class. For more information on mlx, see the examples repo notebook.
To use, you should have the mlx-lm Python package installed, as well as transformers. You can also install huggingface_hub.
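The packages above can be installed with pip; exact versions and any extras are up to you:

```shell
pip install --upgrade mlx-lm transformers huggingface_hub
```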
Model Loading
Models can be loaded by specifying the model parameters using the from_model_id method.
They can also be loaded by passing in an existing model and tokenizer directly, rather than by model id.