site/docs/providers/github.md
GitHub Models provides access to industry-leading AI models from OpenAI, Anthropic, Google, and xAI through a unified API interface.
The GitHub Models provider supports all options available on the OpenAI provider, since it uses the OpenAI-compatible API format.
Set your GitHub personal access token via the `GITHUB_TOKEN` environment variable, or pass it directly in the configuration:

```sh
export GITHUB_TOKEN=your_github_token
```
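To pass the token directly instead, a minimal config sketch looks like this (the model ID here is just an example; the `apiKey` option follows the OpenAI-compatible provider options):

```yaml
providers:
  - id: github:openai/gpt-5-mini
    config:
      apiKey: '{{ env.GITHUB_TOKEN }}' # or a literal token (not recommended in committed configs)
```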
GitHub Models provides access to models from a range of vendors, covering both general-purpose language models and specialized models (for example, code generation and advanced reasoning). The catalog is updated regularly.
For the most up-to-date list of available models, visit the GitHub Models marketplace.
```yaml
providers:
  - github:openai/gpt-5
```
```yaml
providers:
  - id: github:anthropic/claude-4-opus # Uses GITHUB_TOKEN env var
    config:
      temperature: 0.7
      max_tokens: 4096
      # apiKey: "{{ env.GITHUB_TOKEN }}" # optional, auto-detected
```
```yaml
providers:
  - id: github-fast
    provider: github:openai/gpt-5-nano
    config:
      temperature: 0.5
  - id: github-balanced
    provider: github:openai/gpt-5-mini
    config:
      temperature: 0.6
  - id: github-smart
    provider: github:openai/gpt-5
    config:
      temperature: 0.7
  - id: github-multimodal
    provider: github:meta/llama-4-maverick
    config:
      temperature: 0.8
  - id: github-reasoning
    provider: github:xai/grok-4
    config:
      temperature: 0.7
```
Choose models based on your specific needs:
Visit the GitHub Models marketplace to compare model capabilities and pricing.
Authentication can be handled in several ways:

- **Personal Access Token (PAT):** fine-grained PATs need the `models:read` scope; set the token via the `GITHUB_TOKEN` environment variable
- **GitHub Actions:** use the built-in `GITHUB_TOKEN` in workflows
- **Bring Your Own Key (BYOK)**
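For the GitHub Actions option, a minimal workflow sketch might look like the following (the job and step layout is illustrative, and the `models: read` permission for the built-in `GITHUB_TOKEN` is an assumption to verify against your repository settings):

```yaml
name: eval
on: [push]

permissions:
  contents: read
  models: read # assumed permission for GitHub Models access

jobs:
  eval:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: npx promptfoo@latest eval
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
```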
Each model has specific rate limits and pricing. Check the GitHub Models documentation for current details.
The API is served at `https://models.github.ai/inference` and follows the OpenAI-compatible API format.
Models are accessed using the format `github:[model-id]`, where `model-id` follows the naming convention used in the GitHub Models marketplace:

- `[vendor]/[model-name]`
- `azureml/[model-name]`
- `azureml-[vendor]/[model-name]`
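As an illustration of this naming convention, a small helper (hypothetical, not part of promptfoo) can split a provider ID into its vendor and model parts:

```javascript
// Illustrative helper (not a promptfoo API): split a provider ID such as
// "github:azureml-mistral/Codestral-2501" into its components.
function parseGitHubModelId(providerId) {
  // Split only on the first ":" so the model ID itself stays intact.
  const [prefix, modelId] = providerId.split(/:(.+)/, 2);
  if (prefix !== 'github' || !modelId) {
    throw new Error(`Not a GitHub Models provider ID: ${providerId}`);
  }
  const [vendor, ...rest] = modelId.split('/');
  return { vendor, model: rest.join('/') };
}

console.log(parseGitHubModelId('github:openai/gpt-5'));
// { vendor: 'openai', model: 'gpt-5' }
```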
Examples:

- `github:openai/gpt-5`
- `github:openai/gpt-5-mini`
- `github:openai/gpt-5-nano`
- `github:anthropic/claude-4-opus`
- `github:anthropic/claude-4-sonnet`
- `github:google/gemini-2.5-pro`
- `github:xai/grok-4`
- `github:xai/grok-3`
- `github:meta/llama-4-behemoth`
- `github:meta/llama-4-scout`
- `github:meta/llama-4-maverick`
- `github:deepseek/deepseek-r1`
- `github:azureml/Phi-4`
- `github:azureml-mistral/Codestral-2501`

```js
import promptfoo from 'promptfoo';

// Basic usage
const results = await promptfoo.evaluate({
  providers: ['github:openai/gpt-5', 'github:anthropic/claude-4-opus'],
  prompts: ['Write a function to {{task}}'],
  tests: [
    {
      vars: { task: 'reverse a string' },
      assert: [
        {
          type: 'contains',
          value: 'function',
        },
      ],
    },
  ],
});

// Using specialized models
const specializedModels = await promptfoo.evaluate({
  providers: [
    'github:azureml-mistral/Codestral-2501', // Code generation
    'github:deepseek/deepseek-r1', // Advanced reasoning
    'github:xai/grok-4', // Powerful reasoning and analysis
    'github:meta/llama-4-scout', // Extended context (10M tokens)
  ],
  prompts: ['Implement {{algorithm}} with optimal time complexity'],
  tests: [
    {
      vars: { algorithm: 'quicksort' },
      assert: [
        {
          type: 'javascript',
          value: 'output.includes("function") && output.includes("pivot")',
        },
      ],
    },
  ],
});
```
For more information on specific models and their capabilities, refer to the GitHub Models marketplace.