The langchain-runpod integration package connects LangChain applications to models hosted on RunPod Serverless.
The integration offers interfaces for both standard Language Models (LLMs) and Chat Models.
Installation
Install the dedicated partner package:
Setup
1. Deploy an Endpoint on RunPod
- Navigate to your RunPod Serverless Console.
- Create a “New Endpoint”, selecting an appropriate GPU and template (e.g., vLLM, TGI, text-generation-webui) compatible with your model and the expected input/output format (see component guides or the package README).
- Configure settings and deploy.
- Crucially, copy the Endpoint ID after deployment.
2. Set API Credentials
The integration needs your RunPod API Key and the Endpoint ID. Set them as environment variables for secure access (e.g., `RUNPOD_CHAT_ENDPOINT_ID` for a chat endpoint), or pass the Endpoint ID directly during initialization.
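For example, the credentials can be set from Python before initializing any components. Here `RUNPOD_API_KEY` and `RUNPOD_ENDPOINT_ID` follow RunPod's conventional naming, but only `RUNPOD_CHAT_ENDPOINT_ID` is named above, so treat the other variable names as assumptions and confirm them against the package README:

```python
import os

# Set credentials only if they are not already present in the environment.
# Variable names other than RUNPOD_CHAT_ENDPOINT_ID are assumed conventions.
os.environ.setdefault("RUNPOD_API_KEY", "<your-runpod-api-key>")
os.environ.setdefault("RUNPOD_ENDPOINT_ID", "<llm-endpoint-id>")        # LLM component
os.environ.setdefault("RUNPOD_CHAT_ENDPOINT_ID", "<chat-endpoint-id>")  # chat model
```

Using `setdefault` keeps any values already exported in your shell, so the snippet is safe to run in both local and deployed environments.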
Components
This package provides two main components:
1. LLM
For interacting with standard text completion models. See the RunPod LLM Integration Guide for detailed usage.
2. Chat Model
For interacting with conversational models. See the RunPod Chat Model Integration Guide for detailed usage and feature support.
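As a minimal sketch of how the two components are used, assuming the package exposes `RunPod` (LLM) and `ChatRunPod` (chat model) classes and accepts an `endpoint_id` constructor parameter (these names are assumptions; a deployed endpoint and a valid API key are required for the calls to succeed):

```python
from langchain_core.messages import HumanMessage
from langchain_runpod import ChatRunPod, RunPod  # assumed export names

# LLM component: plain text completion against a RunPod Serverless endpoint.
llm = RunPod(endpoint_id="<llm-endpoint-id>")  # assumed parameter name
print(llm.invoke("Write a haiku about GPUs."))

# Chat Model component: message-based conversational interface.
chat = ChatRunPod(endpoint_id="<chat-endpoint-id>")
response = chat.invoke([HumanMessage(content="What is RunPod Serverless?")])
print(response.content)
```

Both classes implement LangChain's standard Runnable interface, so `invoke`, `stream`, and `batch` work the same way as with any other LangChain model.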