Overview
Integration details
| Class | Package | Local | Serializable | JS support |
|---|---|---|---|---|
| ChatDeepSeek | langchain-deepseek | ❌ | beta | ✅ |
Model features
| Tool calling | Structured output | JSON mode | Image input | Audio input | Video input | Token-level streaming | Native async | Token usage | Logprobs |
|---|---|---|---|---|---|---|---|---|---|
| ✅ | ✅ | ❌ | ❌ | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ |
DeepSeek-R1, specified via `model="deepseek-reasoner"`, does not support tool calling or structured output. Those features are supported by DeepSeek-V3, specified via `model="deepseek-chat"` (see the structured output sketch under Invocation below).
Setup
To access DeepSeek models you'll need to create a DeepSeek account, get an API key, and install the `langchain-deepseek` integration package.
Credentials
Head to DeepSeek's API Key page to sign up for DeepSeek and generate an API key. Once you've done this, set the `DEEPSEEK_API_KEY` environment variable:
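A common pattern is to prompt for the key at runtime if it isn't already set; this sketch assumes the standard `getpass`/`os` approach rather than anything DeepSeek-specific:

```python
import getpass
import os

# Prompt for the API key only if it isn't already set in the environment
if not os.environ.get("DEEPSEEK_API_KEY"):
    os.environ["DEEPSEEK_API_KEY"] = getpass.getpass("Enter your DeepSeek API key: ")
```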
Installation
The LangChain DeepSeek integration lives in the `langchain-deepseek` package:
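For example, installing (or upgrading) with pip:

```bash
pip install -U langchain-deepseek
```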
Instantiation
Now we can instantiate our model object and generate chat completions; a combined sketch of instantiation and invocation is shown below.
Invocation
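A minimal sketch, assuming `DEEPSEEK_API_KEY` is set in the environment; the parameter values and the translation prompt are illustrative:

```python
from langchain_deepseek import ChatDeepSeek

# Instantiate the chat model (deepseek-chat selects DeepSeek-V3)
llm = ChatDeepSeek(
    model="deepseek-chat",
    temperature=0,
    max_retries=2,
    # api_key="...",  # optional; defaults to the DEEPSEEK_API_KEY environment variable
)

# Invoke the model with a list of (role, content) messages
messages = [
    ("system", "You are a helpful assistant that translates English to French."),
    ("human", "I love programming."),
]
ai_msg = llm.invoke(messages)
print(ai_msg.content)
```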
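Because structured output and tool calling require `model="deepseek-chat"` (see the note in the Overview), a structured output call might look like the following sketch; the `Joke` schema is a made-up example:

```python
from pydantic import BaseModel, Field

from langchain_deepseek import ChatDeepSeek


class Joke(BaseModel):
    """A joke to tell the user."""

    setup: str = Field(description="The setup of the joke")
    punchline: str = Field(description="The punchline of the joke")


llm = ChatDeepSeek(model="deepseek-chat", temperature=0)

# Coerce model output into the Joke schema (uses tool/function calling under the hood)
structured_llm = llm.with_structured_output(Joke)
joke = structured_llm.invoke("Tell me a joke about cats")
print(joke.setup)
print(joke.punchline)
```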
API reference
For detailed documentation of all ChatDeepSeek features and configurations, head to the API Reference.