Want to run Mistral’s models locally? Check out our Ollama integration.
You are currently on a page documenting the use of Mistral models as text completion models. Many popular models available on Mistral are chat completion models. You may be looking for this page instead.
This page will help you get started with MistralAI text completion models (LLMs). For detailed documentation of all MistralAI features and configuration options, please refer to the API reference.
## Overview

### Integration details
| Class | Package | Local | Serializable | PY support |
| --- | --- | --- | --- | --- |
| MistralAI | @langchain/mistralai | ❌ | ✅ | ❌ |
## Setup
To access MistralAI models you'll need to create a MistralAI account, get an API key, and install the @langchain/mistralai integration package.
### Credentials
Head to console.mistral.ai to sign up for MistralAI and generate an API key. Once you've done this, set the MISTRAL_API_KEY environment variable:
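For example, in a bash-compatible shell (the key value below is a placeholder):

```bash
export MISTRAL_API_KEY="your-api-key"
```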
### Installation
The LangChain MistralAI integration lives in the @langchain/mistralai package:
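For example, using npm (yarn and pnpm work analogously); installing `@langchain/core` alongside it is assumed here, as is typical for LangChain integration packages:

```bash
npm install @langchain/mistralai @langchain/core
```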
## Instantiation
Now we can instantiate our model object and generate completions:
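A minimal sketch, assuming the `MISTRAL_API_KEY` environment variable is set (the model name and parameter values are illustrative):

```typescript
import { MistralAI } from "@langchain/mistralai";

const llm = new MistralAI({
  // Codestral is Mistral's completion-oriented model family
  model: "codestral-latest",
  temperature: 0,
  maxRetries: 2,
  // apiKey: process.env.MISTRAL_API_KEY, // read automatically when set
});
```

## Invocation

Calling `invoke` with a string prompt returns the raw completion text:

```typescript
const inputText = "MistralAI is an AI company that ";

const completion = await llm.invoke(inputText);
console.log(completion);
```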
## Hooks
Mistral AI supports custom hooks for three events: beforeRequest, requestError, and response. Examples of the function signature for each hook type can be seen below:
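A sketch of the hook signatures, using the Fetch API `Request`/`Response` types (exact types may differ by SDK version), plus one plausible way to register the hooks via the constructor:

```typescript
import { MistralAI } from "@langchain/mistralai";

const beforeRequestHook = (
  req: Request
): Request | void | Promise<Request | void> => {
  // Runs before a request is sent to Mistral; may return a modified Request
};

const requestErrorHook = (err: unknown, req: Request): void | Promise<void> => {
  // Runs when an error occurs while Mistral is processing a request
};

const responseHook = (res: Response, req: Request): void | Promise<void> => {
  // Runs after Mistral returns a successful response
};

const llmWithHooks = new MistralAI({
  model: "codestral-latest",
  beforeRequestHooks: [beforeRequestHook],
  requestErrorHooks: [requestErrorHook],
  responseHooks: [responseHook],
});
```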
## API reference

For detailed documentation of all MistralAI features and configurations, head to the API reference.