GLM-4 is a multilingual large language model aligned with human intent, with capabilities in Q&A, multi-turn dialogue, and code generation. Compared with the previous generation, the GLM-4 base model delivers significantly better overall performance, supports longer contexts and stronger multimodality, and offers faster inference with higher concurrency, greatly reducing inference costs. GLM-4 also enhances the capabilities of intelligent agents.
Getting started
Installation
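The package is distributed on PyPI; a typical install command (shown here with pip, as an assumption about your setup) is:

```shell
pip install --upgrade zhipuai
```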
First, ensure the zhipuai package is installed in your Python environment.

Importing the Required Modules
After installation, import the necessary modules into your Python script.

Setting Up Your API Key
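One common pattern, assuming the integration reads the ZHIPUAI_API_KEY environment variable, is to set the key before constructing the model (replace the placeholder with your own key):

```python
import os

# The integration looks for the key in this environment variable.
# Replace the placeholder with the key from your ZHIPU AI account.
os.environ["ZHIPUAI_API_KEY"] = "your-api-key"
```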
Sign in to ZHIPU AI to obtain an API Key for accessing the models.

Initialize the ZHIPU AI Chat Model
Initialize the chat model with the model name and any generation parameters you need.

Basic Usage
Invoke the model with system and human messages.

Advanced Features
Streaming Support
For continuous interaction, use the streaming feature.

Asynchronous Calls
For non-blocking calls, use the asynchronous approach.

Using With Function Calls
The GLM-4 model can be used with function calling as well; the following code runs a simple LangChain json_chat_agent.