# IBM watsonx.ai

WatsonxLLM is a wrapper for IBM watsonx.ai foundation models. This example shows how to communicate with watsonx.ai models using LangChain.
## Overview
### Integration details
| Class | Package | Local | Serializable | JS support | Downloads | Version |
|---|---|---|---|---|---|---|
| WatsonxLLM | langchain-ibm | ❌ | ❌ | ✅ | ![PyPI - Downloads](https://img.shields.io/pypi/dm/langchain-ibm?style=flat-square&label=%20) | ![PyPI - Version](https://img.shields.io/pypi/v/langchain-ibm?style=flat-square&label=%20) |
## Setup
To access IBM watsonx.ai models you'll need to create an IBM watsonx.ai account, get an API key, and install the `langchain-ibm` integration package.
### Credentials
The cell below defines the credentials required to work with watsonx Foundation Model inferencing.

Action: Provide the IBM Cloud user API key. For details, see Managing user API keys.
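A minimal sketch, using `getpass` so the key is not echoed; `langchain-ibm` reads the key from the `WATSONX_APIKEY` environment variable:

```python
import os
from getpass import getpass

# Prompt for the IBM Cloud user API key and expose it to langchain-ibm.
watsonx_api_key = getpass("Enter your IBM Cloud API key: ")
os.environ["WATSONX_APIKEY"] = watsonx_api_key
```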
### Installation

The LangChain IBM integration lives in the `langchain-ibm` package:
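```bash
pip install -qU langchain-ibm
```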
## Instantiation
You might need to adjust model parameters for different models or tasks. For details, refer to the documentation.
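For example, a sampling setup might look like this (the parameter names follow watsonx.ai's text-generation parameters; the values are illustrative):

```python
# Illustrative generation parameters; adjust per model and task.
parameters = {
    "decoding_method": "sample",  # or "greedy"
    "max_new_tokens": 100,
    "min_new_tokens": 1,
    "temperature": 0.5,
    "top_k": 50,
    "top_p": 1,
}
```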
Initialize the `WatsonxLLM` class with the previously set parameters.
Note:
- To provide context for the API call, you must pass the `project_id` or `space_id`. For more information, see the documentation.
- Depending on the region of your provisioned service instance, use one of the URLs described here.

In this example, we'll use the `project_id` and the Dallas URL.
You need to specify the `model_id` of the model that will be used for inferencing. You can find all available models in the documentation.
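A sketch of the initialization, assuming a Dallas (`us-south`) instance; the `model_id` shown is just an example:

```python
from langchain_ibm import WatsonxLLM

watsonx_llm = WatsonxLLM(
    model_id="ibm/granite-13b-instruct-v2",  # example model
    url="https://us-south.ml.cloud.ibm.com",  # Dallas endpoint
    project_id="PASTE YOUR PROJECT_ID HERE",
    params=parameters,
)
```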
Instead of `model_id`, you can also pass the `deployment_id` of a previously tuned model. The entire model tuning workflow is described in Working with TuneExperiment and PromptTuner.
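For example (using the same illustrative placeholders as above):

```python
watsonx_llm = WatsonxLLM(
    deployment_id="PASTE YOUR DEPLOYMENT_ID HERE",
    url="https://us-south.ml.cloud.ibm.com",
    project_id="PASTE YOUR PROJECT_ID HERE",
    params=parameters,
)
```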
For certain requirements, there is an option to pass IBM's `APIClient` object into the `WatsonxLLM` class.
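A minimal sketch; `APIClient` comes from the `ibm_watsonx_ai` package, and its constructor arguments (elided here) depend on your credentials setup:

```python
from ibm_watsonx_ai import APIClient

api_client = APIClient(...)  # construct with your credentials

watsonx_llm = WatsonxLLM(
    model_id="ibm/granite-13b-instruct-v2",  # example model
    watsonx_client=api_client,
)
```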
You can also pass IBM's `ModelInference` object into the `WatsonxLLM` class.
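Again a sketch; the `ModelInference` constructor arguments (elided here) depend on your model, credentials, and project:

```python
from ibm_watsonx_ai.foundation_models import ModelInference

model = ModelInference(...)  # construct with model, credentials, project

watsonx_llm = WatsonxLLM(watsonx_model=model)
```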
## Invocation
To obtain completions, you can call the model directly using a string prompt.
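For example (prompts are illustrative):

```python
# Calling a single prompt
watsonx_llm.invoke("Who is man's best friend?")

# Calling multiple prompts at once
watsonx_llm.generate(
    prompts=[
        "The fastest dog in the world?",
        "Describe your chosen dog breed",
    ]
)
```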
### Streaming the Model output

You can stream the model output.
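For example:

```python
# Print each chunk as it arrives, without newlines between chunks.
for chunk in watsonx_llm.stream(
    "Describe your favorite breed of dog and why it is your favorite."
):
    print(chunk, end="")
```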
## Chaining

Create `PromptTemplate` objects which will be responsible for creating a random question.
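A minimal sketch of a chain, using an example template:

```python
from langchain_core.prompts import PromptTemplate

prompt = PromptTemplate.from_template(
    "Generate a random question about {topic}: Question:"
)

# Pipe the prompt into the model to form a runnable chain.
llm_chain = prompt | watsonx_llm
llm_chain.invoke({"topic": "dog"})
```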
## API reference
For detailed documentation of all `WatsonxLLM` features and configurations, head to the API reference.