fern/03-reference/baml/clients/providers/azure.mdx
For `azure-openai`, we provide a client that can be used to interact with the OpenAI API hosted on Azure, using the `/chat/completions` endpoint.
Example:

```baml
client<llm> MyClient {
  provider azure-openai
  options {
    resource_name "my-resource-name"
    deployment_id "my-deployment-id"
    // Alternatively, you can use the base_url field
    // base_url "https://my-resource-name.openai.azure.com/openai/deployments/my-deployment-id"
    api_version "2024-02-01"
    api_key env.AZURE_OPENAI_API_KEY
  }
}
```
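Once defined, the client can be referenced by name from any BAML function. A minimal sketch, assuming a hypothetical `Summarize` function:

```baml
function Summarize(text: string) -> string {
  client MyClient
  prompt #"
    Summarize the following text in one sentence:
    {{ text }}
  "#
}
```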
The options are passed through directly to the API, barring a few. Here's a summary of the options.

These unique parameters (aka options) modify the API request sent to the provider. You can use them to set the Azure API key, base URL, and API version, for example.
<ParamField path="api_key" type="string"
Will be injected via the header API-KEY. Default: env.AZURE_OPENAI_API_KEY
API-KEY: $api_key
</ParamField>
<ParamField path="base_url" type="string"
The base URL for the API. Default: https://${resource_name}.openai.azure.com/openai/deployments/${deployment_id}
May be used instead of resource_name and deployment_id.
</ParamField>
<ParamField path="deployment_id" type="string" required
See the base_url field.
</ParamField>
<ParamField path="resource_name" type="string" required
See the base_url field.
</ParamField>
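For instance, a minimal sketch of a client configured with `base_url` instead of `resource_name` and `deployment_id` (the client name and URL values are placeholders):

```baml
client<llm> MyBaseUrlClient {
  provider azure-openai
  options {
    // base_url takes the place of resource_name + deployment_id
    base_url "https://my-resource-name.openai.azure.com/openai/deployments/my-deployment-id"
    api_version "2024-02-01"
    api_key env.AZURE_OPENAI_API_KEY
  }
}
```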
Another example, which also sets additional request headers via the `headers` block:

```baml
client<llm> MyClient {
  provider azure-openai
  options {
    resource_name "my-resource-name"
    deployment_id "my-deployment-id"
    api_version "2024-02-01"
    api_key env.AZURE_OPENAI_API_KEY
    headers {
      "X-My-Header" "my-value"
    }
  }
}
```
All other options are passed through to the provider without modification by BAML. For example, if the request accepts a `temperature` field, you can set it in the client so that every call uses that value.

Consult the specific provider's documentation for more information.
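For instance, a sketch of a client that forwards `temperature` to the Azure deployment (the client name and value are illustrative, not recommendations):

```baml
client<llm> MyTunedClient {
  provider azure-openai
  options {
    resource_name "my-resource-name"
    deployment_id "my-deployment-id"
    api_version "2024-02-01"
    api_key env.AZURE_OPENAI_API_KEY
    // Forwarded unchanged in the /chat/completions request body
    temperature 0.2
  }
}
```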
<Warning>
  For reasoning models (like `o1` or `o1-mini`), you must use `max_completion_tokens` instead of `max_tokens`. Please set `max_tokens` to `null` in order to get this to work. See the OpenAI API documentation and OpenAI Reasoning Docs for more details about token handling.
  Example:

  ```baml
  client<llm> AzureO1 {
    provider azure-openai
    options {
      deployment_id "o1-mini"
      max_tokens null
    }
  }
  ```
</Warning>
<ParamField path="messages" type="DO NOT USE"
BAML will auto construct this field for you from the prompt </ParamField> <ParamField path="stream" type="DO NOT USE"
BAML will auto construct this field for you based on how you call the client in your code </ParamField>
For all other options, see the official Azure API documentation.