GreenNode is a global AI solutions provider and an NVIDIA Preferred Partner, delivering full-stack AI capabilities, from infrastructure to application, for enterprises across the US, MENA, and APAC regions. Operating on world-class infrastructure (LEED Gold, TIA-942, Uptime Tier III), GreenNode empowers enterprises, startups, and researchers with a comprehensive suite of AI services.

This page will help you get started with GreenNode Serverless AI chat models. GreenNode AI offers an API to query 20+ leading open-source models. For detailed documentation of all ChatGreenNode features and configurations, head to the API reference.
Overview
Integration details
| Class | Package | Local | Serializable | JS support | Downloads | Version |
|---|---|---|---|---|---|---|
| ChatGreenNode | langchain-greennode | ❌ | beta | ❌ |  |  |
Model features
| Tool calling | Structured output | JSON mode | Image input | Audio input | Video input | Token-level streaming | Native async | Token usage | Logprobs |
|---|---|---|---|---|---|---|---|---|---|
| ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ |
Setup
To access GreenNode models you'll need to create a GreenNode account, get an API key, and install the `langchain-greennode` integration package.
Credentials
Head to this page to sign up for the GreenNode AI Platform and generate an API key. Once you've done this, set the GREENNODE_API_KEY environment variable:
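A minimal sketch of setting the key from an interactive prompt, following the usual LangChain pattern, so the key never appears in your source code:

```python
import getpass
import os

# Prompt for the key only if it is not already present in the environment
if not os.environ.get("GREENNODE_API_KEY"):
    os.environ["GREENNODE_API_KEY"] = getpass.getpass("Enter your GreenNode API key: ")
```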
Installation

The LangChain GreenNode integration lives in the `langchain-greennode` package:
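Install it with pip:

```shell
pip install -qU langchain-greennode
```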
Instantiation
Now we can instantiate our model object and generate chat completions:

Invocation
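A minimal sketch of instantiating the model and invoking it on a list of messages. The model name shown is illustrative; substitute any model from the GreenNode catalog:

```python
from langchain_greennode import ChatGreenNode

# Model name is an example only; pick any model from the GreenNode catalog
llm = ChatGreenNode(
    model="deepseek-ai/DeepSeek-R1-Distill-Qwen-32B",
    temperature=0.6,
)

messages = [
    ("system", "You are a helpful assistant that translates English to French."),
    ("human", "I love programming."),
]
ai_msg = llm.invoke(messages)
print(ai_msg.content)
```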
Streaming
You can also stream the response using the `stream` method:
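A short sketch of token-level streaming, again with an illustrative model name:

```python
from langchain_greennode import ChatGreenNode

llm = ChatGreenNode(model="deepseek-ai/DeepSeek-R1-Distill-Qwen-32B")  # example model

# Print tokens as they arrive instead of waiting for the full response
for chunk in llm.stream("Write a haiku about programming"):
    print(chunk.content, end="", flush=True)
```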
Chat Messages
You can use different message types to structure your conversations with the model:
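For example, system and human messages can be passed as `langchain_core` message objects (model name illustrative):

```python
from langchain_core.messages import HumanMessage, SystemMessage
from langchain_greennode import ChatGreenNode

llm = ChatGreenNode(model="deepseek-ai/DeepSeek-R1-Distill-Qwen-32B")  # example model

messages = [
    SystemMessage(content="You are a helpful assistant."),
    HumanMessage(content="What is the capital of France?"),
]
response = llm.invoke(messages)
print(response.content)
```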
Chaining

You can use `ChatGreenNode` in LangChain chains and agents:
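A sketch of composing the model with a prompt template via LCEL's pipe operator; the model name and prompt are illustrative:

```python
from langchain_core.prompts import ChatPromptTemplate
from langchain_greennode import ChatGreenNode

llm = ChatGreenNode(model="deepseek-ai/DeepSeek-R1-Distill-Qwen-32B")  # example model

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant that translates {input_language} to {output_language}."),
    ("human", "{input}"),
])

# Pipe the prompt into the model to form a runnable chain
chain = prompt | llm
result = chain.invoke({
    "input_language": "English",
    "output_language": "German",
    "input": "I love programming.",
})
print(result.content)
```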
Available Models
The full list of supported models can be found in the GreenNode Serverless AI Models page.

API reference
For more details about the GreenNode Serverless AI API, visit the GreenNode Serverless AI Documentation.