## Overview

### Integration details
| Class | Package | Local | Serializable | JS support |
|---|---|---|---|---|
| AzureAIChatCompletionsModel | langchain-azure-ai | ❌ | ✅ | ✅ |
### Model features
| Tool calling | Structured output | JSON mode | Image input | Audio input | Video input | Token-level streaming | Native async | Token usage | Logprobs |
|---|---|---|---|---|---|---|---|---|---|
| ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | ✅ | ✅ | ✅ |
## Setup

To access AzureAIChatCompletionsModel models, you'll need to create an Azure account, get an API key, and install the `langchain-azure-ai` integration package.
### Credentials
Head to the Azure docs to see how to create your deployment and generate an API key. Once your model is deployed, click the 'get endpoint' button in AI Foundry. This will show you your endpoint and API key. Once you've done this, set the AZURE_AI_CREDENTIAL and AZURE_AI_ENDPOINT environment variables:
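For example, in a terminal session (the placeholder values are illustrative; substitute the key and endpoint shown in AI Foundry):

```shell
export AZURE_AI_CREDENTIAL="your-api-key"
export AZURE_AI_ENDPOINT="https://your-endpoint.services.ai.azure.com/models"
```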
### Installation

The LangChain AzureAIChatCompletionsModel integration lives in the `langchain-azure-ai` package:
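Install (or upgrade) it with pip:

```shell
pip install -U langchain-azure-ai
```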
## Instantiation
Now we can instantiate our model object and generate chat completions:

## Invocation
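A minimal sketch of instantiating the model and making a first call. It assumes the AZURE_AI_ENDPOINT and AZURE_AI_CREDENTIAL environment variables from the Setup section are set; the constructor parameters (`endpoint`, `credential`, `model_name`) and the deployment name `gpt-4o` are illustrative — check the API reference below for the exact signature:

```python
import os

from langchain_azure_ai.chat_models import AzureAIChatCompletionsModel

# Instantiate against your deployed endpoint; credentials come from the
# environment variables set during Setup.
llm = AzureAIChatCompletionsModel(
    endpoint=os.environ["AZURE_AI_ENDPOINT"],
    credential=os.environ["AZURE_AI_CREDENTIAL"],
    model_name="gpt-4o",  # your deployment/model name (assumption)
)

# A simple chat completion: a system instruction plus a human message.
messages = [
    ("system", "You are a helpful assistant that translates English to French."),
    ("human", "I love programming."),
]
ai_msg = llm.invoke(messages)
print(ai_msg.content)
```

The returned `ai_msg` is an `AIMessage`, so its text lives on `.content`; the same `invoke` call accepts a plain string for quick one-off prompts.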
## API reference
For detailed documentation of all AzureAIChatCompletionsModel features and configurations, head to the API reference: https://python.langchain.com/api_reference/azure_ai/chat_models/langchain_azure_ai.chat_models.AzureAIChatCompletionsModel.html