The provider configuration determines the URL and request format the BAML runtime uses.
| Provider Name | Docs | Notes |
|---|---|---|
| anthropic | Anthropic | Supports the /v1/messages endpoint |
| aws-bedrock | AWS Bedrock | Supports the Converse and ConverseStream endpoints |
| google-ai | Google AI | Supports Google AI's generateContent and streamGenerateContent endpoints |
| vertex-ai | Vertex AI | Supports Vertex's generateContent and streamGenerateContent endpoints |
| openai | OpenAI | Supports the /chat/completions endpoint |
| openai-responses | OpenAI Responses API | Supports OpenAI's most advanced /responses endpoint |
| azure-openai | Azure OpenAI | Supports Azure's /chat/completions endpoint |
| openai-generic | OpenAI (generic) | Any other provider that supports OpenAI's /chat/completions endpoint |
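As a minimal sketch, a client using one of these providers is declared like this (the client name, model, and environment variable here are illustrative assumptions):

```baml
client<llm> MyOpenAIClient {
  provider openai
  options {
    model "gpt-4o"
    api_key env.OPENAI_API_KEY
  }
}
```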
A non-exhaustive list of providers you can use with openai-generic:
| Inference Provider | Docs |
|---|---|
| Azure AI Foundry | Azure AI Foundry |
| Groq | Groq |
| Hugging Face | Hugging Face |
| Keywords AI | Keywords AI |
| Litellm | Litellm |
| LM Studio | LM Studio |
| Ollama | Ollama |
| OpenRouter | OpenRouter |
| Vercel AI Gateway | Vercel AI Gateway |
| TogetherAI | TogetherAI |
| Unify AI | Unify AI |
| vLLM | vLLM |
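To use one of these with openai-generic, point base_url at the provider's OpenAI-compatible endpoint. A sketch for a local Ollama server (the port and model name are assumptions; check your provider's docs for the actual values):

```baml
client<llm> MyOllamaClient {
  provider openai-generic
  options {
    base_url "http://localhost:11434/v1"
    model "llama3"
  }
}
```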
We also have some special providers that allow composing clients together:
| Provider Name | Docs | Notes |
|---|---|---|
| fallback | Fallback | Used to chain models conditional on failures |
| round-robin | Round Robin | Used to load balance |
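Composed providers take a list of other clients as their strategy. A hedged sketch, assuming two clients named ClientA and ClientB have already been defined:

```baml
client<llm> MyFallbackClient {
  provider fallback
  options {
    // Tries ClientA first; on failure, falls back to ClientB.
    strategy [ClientA, ClientB]
  }
}
```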