Baseten is a provider of all the infrastructure you need to deploy and serve ML models performantly, reliably, and scalably.
As a model inference platform, Baseten is a Provider in the LangChain ecosystem. The Baseten integration currently implements Chat Models and Embeddings components.
Baseten lets you access open source models such as Kimi K2 or GPT OSS through Model APIs by specifying a model slug, and run proprietary or fine-tuned models on dedicated GPUs through dedicated deployments by specifying a `model_url`.
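For illustration, here is a minimal sketch of the two addressing modes. It assumes the `langchain-baseten` package exposes a `ChatBaseten` chat model; the class name, model slug, and deployment URL below are assumptions/placeholders, not confirmed by this page.

```python
from langchain_baseten import ChatBaseten  # class name assumed

# Model APIs: select a hosted open source model by its model slug.
chat_from_slug = ChatBaseten(model="moonshotai/Kimi-K2-Instruct")  # slug is illustrative

# Dedicated deployment: point at your own deployment by its URL.
chat_from_url = ChatBaseten(
    model_url="https://model-xxxxxxx.api.baseten.co/environments/production/sync/v1"  # placeholder URL
)
```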
Installation and Setup
You’ll need two things to use Baseten models with LangChain:
- A Baseten account
- An API key
Export your API key as an environment variable called `BASETEN_API_KEY`.
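For example, after installing the integration package (e.g. `pip install langchain-baseten`; the package name is an assumption), you might set the key from Python like this:

```python
import getpass
import os

# Prompt for the API key if it is not already set in the environment.
if "BASETEN_API_KEY" not in os.environ:
    os.environ["BASETEN_API_KEY"] = getpass.getpass("Baseten API key: ")
```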
Chat Models (Model APIs and Dedicated Deployments)
See a usage example.
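A minimal chat invocation sketch, again assuming a `ChatBaseten` class from `langchain-baseten` and an illustrative model slug:

```python
from langchain_baseten import ChatBaseten  # class name assumed

chat = ChatBaseten(model="openai/gpt-oss-120b")  # slug is illustrative

# Standard LangChain chat interface: a list of (role, content) messages.
response = chat.invoke(
    [
        ("system", "You are a concise assistant."),
        ("human", "What is Baseten?"),
    ]
)
print(response.content)
```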
Embeddings (Dedicated Deployments Only)
See a usage example.
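A sketch for embeddings against a dedicated deployment, assuming a `BasetenEmbeddings` class that accepts the deployment's `model_url` (the class name and URL are placeholders):

```python
from langchain_baseten import BasetenEmbeddings  # class name assumed

embeddings = BasetenEmbeddings(
    # Placeholder URL for a dedicated embedding deployment.
    model_url="https://model-xxxxxxx.api.baseten.co/environments/production/sync"
)

# Standard LangChain embeddings interface.
query_vector = embeddings.embed_query("What is Baseten?")
doc_vectors = embeddings.embed_documents(["First document.", "Second document."])
print(len(query_vector), len(doc_vectors))
```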