Azure ML Online Endpoint
Set up
You must deploy a model on Azure ML or to Azure AI Foundry (formerly Azure AI Studio) and obtain the following parameters:

- `endpoint_url`: The REST endpoint URL provided by the endpoint.
- `endpoint_api_type`: Use `endpoint_type='dedicated'` when deploying models to dedicated endpoints (hosted managed infrastructure). Use `endpoint_type='serverless'` when deploying models with the pay-as-you-go offering (model as a service).
- `endpoint_api_key`: The API key provided by the endpoint.
- `deployment_name`: (Optional) The deployment name of the model using the endpoint.
Content Formatter
The `content_formatter` parameter is a handler class for transforming the request and response of an AzureML endpoint to match the required schema. Since the model catalog contains a wide range of models, each of which may process data differently, a `ContentFormatterBase` class is provided to allow users to transform data to their liking. The following content formatters are provided:
- `GPT2ContentFormatter`: Formats request and response data for GPT2
- `DollyContentFormatter`: Formats request and response data for the Dolly-v2
- `HFContentFormatter`: Formats request and response data for text-generation Hugging Face models
- `CustomOpenAIContentFormatter`: Formats request and response data for models like LLaMa 2 that follow the OpenAI API compatible scheme
`OSSContentFormatter` is being deprecated and replaced with `GPT2ContentFormatter`. The logic is the same, but `GPT2ContentFormatter` is a more suitable name. You can continue to use `OSSContentFormatter`, as the changes are backwards compatible.
Examples
Example: LLaMa 2 completions with real-time endpoints
Example: Chat completions with pay-as-you-go deployments (model as a service)
Example: Custom content formatter
Example: Dolly with LLMChain
Serializing an LLM
You can also save and load LLM configurations.