## Getting started
### Installation
First, Yuan2.0 provides an OpenAI-compatible API, and ChatYuan2 is integrated into the LangChain chat model framework by using the OpenAI client. Therefore, ensure the `openai` package is installed in your Python environment. Run the following command:
### Importing the Required Modules

After installation, import the necessary modules into your Python script:
### Setting Up Your API Server

Set up your OpenAI-compatible API server by following the yuan2 openai api server instructions. If you deployed the API server locally, you can simply set `yuan2_api_key="EMPTY"`, or to anything you want. Just make sure `yuan2_api_base` is set correctly.
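For a local deployment, these two settings might look like the following sketch (the base URL is a placeholder for wherever your server actually listens):

```python
# Placeholder configuration for a locally deployed, unauthenticated server.
yuan2_api_key = "EMPTY"                      # any value is accepted by a local server
yuan2_api_base = "http://127.0.0.1:8001/v1"  # hypothetical address; use your own
```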
### Initialize the ChatYuan2 Model
Here's how to initialize the chat model:
### Basic Usage

Invoke the model with system and human messages like this:
### Basic Usage with streaming

For continuous interaction, use the streaming feature:
## Advanced Features

### Usage with async calls
Invoke the model with non-blocking calls, like this:
### Usage with prompt template

Invoke the model with non-blocking calls combined with a chat prompt template, like this:
### Usage with async calls in streaming

For non-blocking calls with streaming output, use the astream method: