- Achieve unprecedented speed for AI inference workloads
- Build commercially with high throughput
- Effortlessly scale your AI workloads with our seamless clustering technology
## Overview
### Integration details
| Class | Package | Local | Serializable | PY support | Downloads | Version |
|---|---|---|---|---|---|---|
| ChatCerebras | @langchain/cerebras | ❌ | ❌ | ✅ | | |
### Model features
See the links in the table headers below for guides on how to use specific features.

| Tool calling | Structured output | JSON mode | Image input | Audio input | Video input | Token-level streaming | Token usage | Logprobs |
|---|---|---|---|---|---|---|---|---|
| ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | ✅ | ❌ |
## Setup
To access ChatCerebras models you'll need to create a Cerebras account, get an API key, and install the `@langchain/cerebras` integration package.
### Credentials
Get an API key from cloud.cerebras.ai and add it to your environment variables:

### Installation
The LangChain ChatCerebras integration lives in the `@langchain/cerebras` package:
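The two setup steps above can be sketched in a shell session. The environment variable name `CEREBRAS_API_KEY` and the npm command follow common convention for this integration; verify both against the package README:

```shell
# Hypothetical key shown; substitute your own key from cloud.cerebras.ai.
export CEREBRAS_API_KEY="your-api-key"

# Install the integration package alongside the LangChain core package.
npm install @langchain/cerebras @langchain/core
```

If you prefer not to use environment variables, most LangChain chat model classes also accept the key directly in the constructor.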
## Instantiation
Now we can instantiate our model object and generate chat completions:

## Invocation
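A minimal sketch of instantiating the model and invoking it on a message list. It assumes `@langchain/cerebras` is installed and `CEREBRAS_API_KEY` is set; the model name and parameters are illustrative assumptions, so check the Cerebras model list for currently available models:

```typescript
import { ChatCerebras } from "@langchain/cerebras";

// Model name is an assumption; pick one from Cerebras' published model list.
const llm = new ChatCerebras({
  model: "llama3.1-8b",
  temperature: 0,
});

// Messages can be given as [role, content] tuples.
const aiMsg = await llm.invoke([
  ["system", "You are a helpful assistant that translates English to French."],
  ["human", "I love programming."],
]);
console.log(aiMsg.content);
```

The returned value is an `AIMessage`, so token usage and other response metadata are available on the message object as well as the text content.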
## JSON invocation
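As a sketch of JSON mode, the example below assumes the Cerebras integration forwards an OpenAI-style `response_format` call option to the underlying API; verify the exact option name and shape against the API reference before relying on it:

```typescript
import { ChatCerebras } from "@langchain/cerebras";

// Model name is an assumption; see Cerebras' model list.
const llm = new ChatCerebras({ model: "llama3.1-8b" });

// Assumed call option: OpenAI-style JSON mode via `response_format`.
const jsonMsg = await llm.invoke(
  [
    ["system", "Reply only with a JSON object containing a `translation` key."],
    ["human", "Translate 'I love programming' into French."],
  ],
  { response_format: { type: "json_object" } }
);
console.log(jsonMsg.content);
```

Note that JSON mode only constrains the output to valid JSON; prompt for the exact keys you want, or use structured output with a schema when you need a guaranteed shape.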
## API reference
For detailed documentation of all ChatCerebras features and configurations, head to the API reference.