Hugging Face Inference Providers
We can also access embedding models via Inference Providers, which lets us use open source models on scalable serverless infrastructure. First, we need to get a read-only API key from Hugging Face. We can then use the `HuggingFaceInferenceAPIEmbeddings` class to run open source embedding models via Inference Providers.
Hugging Face Hub
We can also generate embeddings locally with models from the Hugging Face Hub, which requires us to install the `huggingface_hub` package.