You are currently on a page documenting the use of Fireworks models as text completion models. Many popular Fireworks models are chat completion models. You may be looking for this page instead.
Fireworks accelerates product development on generative AI by creating an innovative AI experiment and production platform. This example goes over how to use LangChain to interact with Fireworks models.
Overview
Integration details
| Class | Package | Local | Serializable | JS support |
|---|---|---|---|---|
| Fireworks | langchain-fireworks | ❌ | ❌ | ✅ |
Setup
Credentials
Sign in to Fireworks AI to obtain an API key to access our models, and make sure it is set as the FIREWORKS_API_KEY environment variable.
Set up your model using a model ID. If the model is not set, the default model is fireworks-llama-v2-7b-chat. See the full, most up-to-date model list on fireworks.ai.
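A common way to handle the credential in a notebook is to read it from the environment and prompt for it only if it is missing. This is a minimal sketch; the prompt text and the interactive-terminal guard are conveniences, not part of the Fireworks API.

```python
import getpass
import os
import sys

# Prompt for the key only when it is missing and we are attached to a
# terminal; in non-interactive contexts the variable must be pre-set.
if "FIREWORKS_API_KEY" not in os.environ and sys.stdin.isatty():
    os.environ["FIREWORKS_API_KEY"] = getpass.getpass("Fireworks API key: ")
```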
Installation
You need to install the langchain-fireworks Python package for the rest of the notebook to work.
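The package is installable from PyPI with pip:

```shell
pip install -qU langchain-fireworks
```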
Instantiation
Invocation
You can call the model directly with string prompts to get completions.
Invoking with multiple prompts
Invoking with additional parameters
Chaining
You can use the LangChain Expression Language (LCEL) to create a simple chain with non-chat models.
Streaming
You can stream the output if you want.
API reference
For detailed documentation of all Fireworks LLM features and configurations, head to the API reference: https://python.langchain.com/api_reference/fireworks/llms/langchain_fireworks.llms.Fireworks.html