Overview
Integration details
| Class | Package | Local | Serializable | PY support | Downloads | Version |
|---|---|---|---|---|---|---|
| ChatMistralAI | @langchain/mistralai | ❌ | ❌ | ✅ | | |
Model features
See the links in the table headers below for guides on how to use specific features.

| Tool calling | Structured output | JSON mode | Image input | Audio input | Video input | Token-level streaming | Token usage | Logprobs |
|---|---|---|---|---|---|---|---|---|
| ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | ✅ | ❌ |
Setup
To access Mistral AI models you’ll need to create a Mistral AI account, get an API key, and install the @langchain/mistralai integration package.
Credentials
Head here to sign up for Mistral AI and generate an API key. Once you’ve done this, set the MISTRAL_API_KEY environment variable:
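For example, in a bash-compatible shell (the key value below is a placeholder):

```shell
export MISTRAL_API_KEY="your-api-key"
```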
Installation
The LangChain ChatMistralAI integration lives in the @langchain/mistralai package:
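For example, using npm (yarn and pnpm work analogously); @langchain/core provides the shared message and tool abstractions used in the examples below:

```shell
npm install @langchain/mistralai @langchain/core
```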
Instantiation
Now we can instantiate our model object and generate chat completions:

Invocation
When sending chat messages to Mistral, there are a few requirements to follow:
- The first message cannot be an assistant (ai) message.
- Messages must alternate between user and assistant (ai) messages.
- Messages cannot end with an assistant (ai) or system message.
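A minimal sketch of instantiating the model and invoking it with a message sequence that satisfies the rules above. It assumes MISTRAL_API_KEY is set in the environment; the model name is an assumption, and any chat-capable Mistral model can be substituted:

```typescript
import { ChatMistralAI } from "@langchain/mistralai";
import { HumanMessage } from "@langchain/core/messages";

// The API key is read from MISTRAL_API_KEY by default.
// "mistral-small-latest" is an illustrative choice, not a requirement.
const llm = new ChatMistralAI({
  model: "mistral-small-latest",
  temperature: 0,
});

// A single human message satisfies all three ordering rules:
// it is first, it alternates trivially, and it is the last message.
const response = await llm.invoke([
  new HumanMessage("Translate 'hello' into French."),
]);
console.log(response.content);
```

Because the sequence must end on a user message, a follow-up turn would append the model's reply and then a new HumanMessage before invoking again.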
Tool calling
Mistral’s API supports tool calling for a subset of their models. You can see which models support tool calling on this page. The examples below demonstrate how to use it:

Hooks
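As a hedged sketch, a tool defined with @langchain/core can be bound to the model via bindTools; the tool's name, schema, and the model name here are illustrative assumptions:

```typescript
import { ChatMistralAI } from "@langchain/mistralai";
import { tool } from "@langchain/core/tools";
import { z } from "zod";

// A hypothetical calculator tool for demonstration purposes.
const addTool = tool(
  async ({ a, b }: { a: number; b: number }) => String(a + b),
  {
    name: "add",
    description: "Add two numbers together.",
    schema: z.object({ a: z.number(), b: z.number() }),
  }
);

// Assumes a tool-calling-capable model; check Mistral's docs for the list.
const llm = new ChatMistralAI({ model: "mistral-large-latest" });
const llmWithTools = llm.bindTools([addTool]);

const aiMsg = await llmWithTools.invoke("What is 2 + 3?");
// The model's requested tool invocations, if any, appear here.
console.log(aiMsg.tool_calls);
```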
Mistral AI supports custom hooks for three events: beforeRequest, requestError, and response. Examples of the function signature for each hook type can be seen below:

API reference
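A sketch of the three hook shapes, assuming the constructor accepts them as beforeRequestHooks, requestErrorHooks, and responseHooks arrays (the option names and exact signatures are assumptions; consult the API reference below to confirm):

```typescript
import { ChatMistralAI } from "@langchain/mistralai";

// Runs before each HTTP request is sent; may inspect or modify it.
const beforeRequestHook = (req: Request): void => {
  console.log("Sending request to", req.url);
};

// Runs when a request fails at the transport level.
const requestErrorHook = (err: unknown, req: Request): void => {
  console.error("Request to", req.url, "failed:", err);
};

// Runs after each HTTP response is received.
const responseHook = (res: Response): void => {
  console.log("Received status", res.status);
};

const model = new ChatMistralAI({
  model: "mistral-small-latest",
  beforeRequestHooks: [beforeRequestHook],
  requestErrorHooks: [requestErrorHook],
  responseHooks: [responseHook],
});
```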
For detailed documentation of all ChatMistralAI features and configurations head to the API reference.