docs/configuration.md
Taskmaster uses two primary methods for configuration:
## `.taskmaster/config.json` File (Recommended - New Structure)

This file is created in the `.taskmaster/` directory when you run the `task-master models --setup` interactive setup or initialize a new project with `task-master init`. A legacy `.taskmasterconfig` file in the project root will continue to work, but should be migrated to the new structure using `task-master migrate`.

Use the `task-master models --setup` command (or the `models` MCP tool) to interactively create and manage this file. You can also set specific models directly using `task-master models --set-<role>=<model_id>`, adding `--ollama` or `--openrouter` flags for custom models. Manual editing is possible but not recommended unless you understand the structure.

Example:

```json
{
  "models": {
    "main": {
      "provider": "anthropic",
      "modelId": "claude-3-7-sonnet-20250219",
      "maxTokens": 64000,
      "temperature": 0.2,
      "baseURL": "https://api.anthropic.com/v1"
    },
    "research": {
      "provider": "perplexity",
      "modelId": "sonar-pro",
      "maxTokens": 8700,
      "temperature": 0.1,
      "baseURL": "https://api.perplexity.ai/v1"
    },
    "fallback": {
      "provider": "anthropic",
      "modelId": "claude-3-5-sonnet",
      "maxTokens": 64000,
      "temperature": 0.2
    }
  },
  "global": {
    "logLevel": "info",
    "debug": false,
    "defaultNumTasks": 10,
    "defaultSubtasks": 5,
    "defaultPriority": "medium",
    "defaultTag": "master",
    "projectName": "Your Project Name",
    "ollamaBaseURL": "http://localhost:11434/api",
    "azureBaseURL": "https://your-endpoint.openai.azure.com/openai/deployments",
    "vertexProjectId": "your-gcp-project-id",
    "vertexLocation": "us-central1",
    "responseLanguage": "English"
  }
}
```
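Both the new and legacy locations can be checked programmatically. The sketch below (a hypothetical helper, not part of Task Master's API) illustrates the documented lookup order: prefer `.taskmaster/config.json`, fall back to the legacy root-level `.taskmasterconfig`:

```python
from pathlib import Path

def find_config(project_root):
    """Return the active config file path, or None if neither exists.

    Illustrative only: mirrors the lookup order this guide describes,
    where the new-style file wins over the legacy one.
    """
    root = Path(project_root)
    new_style = root / ".taskmaster" / "config.json"
    legacy = root / ".taskmasterconfig"
    if new_style.is_file():
        return new_style
    if legacy.is_file():
        return legacy  # still works, but should be migrated
    return None
```

A project containing both files resolves to the new-style path, which matches the migration guidance above.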
For MCP-specific setup and troubleshooting, see Provider-Specific Configuration.
## Legacy `.taskmasterconfig` File (Backward Compatibility)

If you have an existing `.taskmasterconfig` file, run `task-master migrate` to move it to `.taskmaster/config.json`.

## Tool Loading Configuration (`TASK_MASTER_TOOLS`)

The `TASK_MASTER_TOOLS` environment variable controls which tools are loaded by the Task Master MCP server. This allows you to optimize token usage based on your workflow needs.
> **Note:** Prefer setting `TASK_MASTER_TOOLS` in your MCP client's `env` block (e.g., `.cursor/mcp.json`) or in CI/deployment environment variables. The `.env` file is reserved for API keys/endpoints; avoid persisting non-secret settings there.
Available modes:

- `core` (default; alias `lean`): Loads 7 essential tools (~5,000 tokens, ~70% reduction): `get_tasks`, `next_task`, `get_task`, `set_task_status`, `update_subtask`, `parse_prd`, `expand_task`
- `standard`: Loads 15 commonly used tools (~10,000 tokens, ~50% reduction)
- `all`: Loads all 36 available tools (~21,000 tokens)
- Custom list: A comma-separated list of specific tool names, e.g. `"get_tasks,next_task,set_task_status"`

In MCP configuration files (`.cursor/mcp.json`, `.vscode/mcp.json`, etc.) - recommended:
```jsonc
{
  "mcpServers": {
    "task-master-ai": {
      "env": {
        // Set the tool loading mode; API keys can still use .env for security
        "TASK_MASTER_TOOLS": "standard"
      }
    }
  }
}
```
Via Claude Code CLI:
```bash
claude mcp add task-master-ai --scope user \
  --env TASK_MASTER_TOOLS="core" \
  -- npx -y task-master-ai@latest
```
In CI/deployment environment variables:
```bash
export TASK_MASTER_TOOLS="standard"
node mcp-server/server.js
```
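The value handling described in this section (defaulting when unset, case-insensitive keywords, and comma-separated custom lists) can be sketched in Python; `resolve_tool_mode` is an illustrative helper, not Task Master's actual implementation:

```python
# The seven "core" tools listed in this guide.
CORE_TOOLS = [
    "get_tasks", "next_task", "get_task", "set_task_status",
    "update_subtask", "parse_prd", "expand_task",
]

def resolve_tool_mode(raw):
    """Interpret a TASK_MASTER_TOOLS value (illustrative sketch)."""
    value = (raw or "").strip()
    if not value:
        return ("core", CORE_TOOLS)   # unset/empty -> core
    keyword = value.lower()           # "CORE" == "core" == "Core"
    if keyword in ("core", "lean"):
        return ("core", CORE_TOOLS)
    if keyword in ("standard", "all"):
        return (keyword, None)        # full tool sets resolved elsewhere
    # Anything else is treated as a comma-separated custom list.
    custom = [name.strip() for name in value.split(",") if name.strip()]
    return ("custom", custom)
```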
Behavior notes:

- If `TASK_MASTER_TOOLS` is unset or empty, the system defaults to `"core"`.
- To load the full tool set, set the variable explicitly to `"all"`.
- Values are case-insensitive (`"CORE"`, `"core"`, and `"Core"` are treated identically).

## Environment Variables (`.env` File or MCP `env` Block - For API Keys Only)

API keys can be set in a `.env` file in your project root (for CLI use) or in the `env` section of your `.cursor/mcp.json` file (for MCP use):

- `ANTHROPIC_API_KEY`: Your Anthropic API key.
- `PERPLEXITY_API_KEY`: Your Perplexity API key.
- `OPENAI_API_KEY`: Your OpenAI API key.
- `GOOGLE_API_KEY`: Your Google API key (also used for the Vertex AI provider).
- `MISTRAL_API_KEY`: Your Mistral API key.
- `AZURE_OPENAI_API_KEY`: Your Azure OpenAI API key (also requires `AZURE_OPENAI_ENDPOINT`).
- `OPENROUTER_API_KEY`: Your OpenRouter API key.
- `XAI_API_KEY`: Your X-AI API key.

Optional endpoint overrides:

- `baseURL` in the config file: You can add a `baseURL` property to any model role (`main`, `research`, `fallback`) to override the default API endpoint for that provider. If omitted, the provider's standard endpoint is used.
- Environment variable overrides (`<PROVIDER>_BASE_URL`): For greater flexibility, especially with third-party services, you can set an environment variable like `OPENAI_BASE_URL` or `MISTRAL_BASE_URL`. This overrides any `baseURL` set in the configuration file for that provider, and is the recommended way to connect to OpenAI-compatible APIs.
- `AZURE_OPENAI_ENDPOINT`: Required if using an Azure OpenAI key (can also be set as `baseURL` for the Azure model role).
- `OLLAMA_BASE_URL`: Override the default Ollama API URL (default: `http://localhost:11434/api`).
- `VERTEX_PROJECT_ID`: Your Google Cloud project ID for Vertex AI. Required when using the `vertex` provider.
- `VERTEX_LOCATION`: Google Cloud region for Vertex AI (e.g., `us-central1`, the default).
- `GOOGLE_APPLICATION_CREDENTIALS`: Path to a service account credentials JSON file for Google Cloud auth (an alternative to an API key for Vertex AI).

**Important:** Settings like model ID selections (`main`, `research`, `fallback`), `maxTokens`, `temperature`, `logLevel`, `defaultSubtasks`, `defaultPriority`, and `projectName` are managed in `.taskmaster/config.json` (or `.taskmasterconfig` for unmigrated projects), not in environment variables.
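The endpoint-override precedence (environment variable, then config `baseURL`, then the provider default) can be sketched as follows; the helper and the default table are illustrative assumptions, not Task Master's code:

```python
import os

# Hypothetical provider defaults, for illustration only; the real
# defaults live inside Task Master itself.
PROVIDER_DEFAULTS = {
    "openai": "https://api.openai.com/v1",
    "ollama": "http://localhost:11434/api",
}

def resolve_base_url(provider, role_config, env=os.environ):
    """Pick the effective endpoint for one model role (sketch)."""
    # 1. A <PROVIDER>_BASE_URL environment variable wins (e.g. OPENAI_BASE_URL).
    env_override = env.get(f"{provider.upper()}_BASE_URL")
    if env_override:
        return env_override
    # 2. Then the role's "baseURL" from the config file.
    if role_config.get("baseURL"):
        return role_config["baseURL"]
    # 3. Otherwise the provider's standard endpoint.
    return PROVIDER_DEFAULTS.get(provider)
```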
## Tagged Task Lists Configuration

Taskmaster includes a tagged task lists system for multi-context task management.
```json
"global": {
  "defaultTag": "master"
}
```
- `defaultTag` (string): Default tag context for new operations (default: `"master"`)

Task Master provides manual git integration through the `--from-branch` option:
- Use `task-master add-tag --from-branch` to create a tag based on your current git branch name.

Taskmaster uses `.taskmaster/state.json` to track tagged-system runtime information:
```json
{
  "currentTag": "master",
  "lastSwitched": "2025-06-11T20:26:12.598Z",
  "migrationNoticeShown": true
}
```
- `currentTag`: Currently active tag context
- `lastSwitched`: Timestamp of the last tag switch
- `migrationNoticeShown`: Whether the migration notice has been displayed

This file is created automatically during tagged-system migration and should not be edited by hand.
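Reading the state file defensively can be sketched as below; `read_current_tag` is a hypothetical helper that falls back to the documented `"master"` default when the file does not exist yet:

```python
import json
from pathlib import Path

def read_current_tag(project_root):
    """Return the active tag from .taskmaster/state.json (sketch)."""
    state_file = Path(project_root) / ".taskmaster" / "state.json"
    if not state_file.is_file():
        return "master"  # default tag before the first switch
    state = json.loads(state_file.read_text())
    return state.get("currentTag", "master")
```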
## Example `.env` File (for API Keys)

```bash
# Required API keys for providers configured in .taskmaster/config.json
ANTHROPIC_API_KEY=sk-ant-api03-your-key-here
PERPLEXITY_API_KEY=pplx-your-key-here
# OPENAI_API_KEY=sk-your-key-here
# GOOGLE_API_KEY=AIzaSy...
# AZURE_OPENAI_API_KEY=your-azure-openai-api-key-here
# etc.

# Optional endpoint overrides
# Use a specific provider's base URL, e.g., for an OpenAI-compatible API
# OPENAI_BASE_URL=https://api.third-party.com/v1

# Azure OpenAI configuration
# AZURE_OPENAI_ENDPOINT=https://your-resource-name.openai.azure.com/ or https://your-endpoint-name.cognitiveservices.azure.com/openai/deployments

# OLLAMA_BASE_URL=http://custom-ollama-host:11434/api

# Google Vertex AI configuration (required if using the 'vertex' provider)
# VERTEX_PROJECT_ID=your-gcp-project-id
```
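Because keys only need to be present for the providers actually selected in your config file, a pre-flight check can be sketched like this (an illustrative helper; the provider-to-variable mapping mirrors the list in this guide):

```python
import os

# Providers named in this guide mapped to the env var each one requires.
PROVIDER_KEYS = {
    "anthropic": "ANTHROPIC_API_KEY",
    "perplexity": "PERPLEXITY_API_KEY",
    "openai": "OPENAI_API_KEY",
    "google": "GOOGLE_API_KEY",
    "mistral": "MISTRAL_API_KEY",
    "azure": "AZURE_OPENAI_API_KEY",
    "openrouter": "OPENROUTER_API_KEY",
    "xai": "XAI_API_KEY",
}

def missing_keys(configured_providers, env=os.environ):
    """Return the required env var names that are unset (sketch)."""
    required = {PROVIDER_KEYS[p] for p in configured_providers if p in PROVIDER_KEYS}
    return sorted(k for k in required if not env.get(k))
```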
## Troubleshooting

If configuration is missing or invalid:

- Run `task-master models --setup` in your project root to create or repair the file.
- Model settings live in `.taskmaster/config.json`. For legacy projects, use `task-master migrate` to move to the new structure.
- Ensure API keys exist in your `.env` file (for the CLI) or `.cursor/mcp.json` (for MCP) and are valid for the providers selected in your config file.

If `task-master init` doesn't respond, try running it with Node directly:
```bash
node node_modules/claude-task-master/scripts/init.js
```
Or clone the repository and run:
```bash
git clone https://github.com/eyaltoledano/claude-task-master.git
cd claude-task-master
node scripts/init.js
```
## MCP Provider

Prerequisites:

Configuration:
```json
{
  "models": {
    "main": {
      "provider": "mcp",
      "modelId": "mcp-sampling"
    },
    "research": {
      "provider": "mcp",
      "modelId": "mcp-sampling"
    }
  }
}
```
Available Model IDs:
- `mcp-sampling` - General text generation using MCP client sampling (supports all roles)
- `claude-3-5-sonnet-20241022` - High-performance model for general tasks (supports all roles)
- `claude-3-opus-20240229` - Enhanced reasoning model for complex tasks (supports all roles)

Features:
Usage Requirements:
- Your MCP client must support the `clientCapabilities.sampling` capability

Best Practices:
- Use the `mcp` provider for the main/research roles when running in MCP environments

Setup Commands:
```bash
# Set MCP provider for main role
task-master models set-main --provider mcp --model claude-3-5-sonnet-20241022

# Set MCP provider for research role
task-master models set-research --provider mcp --model claude-3-opus-20240229

# Verify configuration
task-master models list
```
Troubleshooting:
## MCP Timeout Configuration

Long-running AI operations in taskmaster-ai can exceed the default 60-second MCP timeout. Operations like `parse_prd`, `expand_task`, `research`, and `analyze_project_complexity` may take 2-5 minutes to complete.
Add a timeout parameter to your MCP configuration to extend the timeout limit. The timeout configuration works identically across MCP clients including Cursor, Windsurf, and RooCode:
```json
{
  "mcpServers": {
    "task-master-ai": {
      "command": "npx",
      "args": ["-y", "--package=task-master-ai", "task-master-ai"],
      "timeout": 300,
      "env": {
        "ANTHROPIC_API_KEY": "your-anthropic-api-key"
      }
    }
  }
}
```
Configuration Details:

- `timeout: 300` - Sets the timeout to 300 seconds (5 minutes)

When adding taskmaster rules for supported editors, the timeout configuration is included automatically:
```bash
# Automatically includes timeout configuration
task-master rules add cursor
task-master rules add roo
task-master rules add windsurf
task-master rules add vscode
```
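If you want to verify the timeout yourself, a small check over an already-parsed MCP config can be sketched as follows; `servers_needing_timeout` is a hypothetical helper, not a Task Master command:

```python
def servers_needing_timeout(mcp_config, minimum=300):
    """List server names whose timeout is missing or below `minimum` (sketch)."""
    flagged = []
    for name, server in mcp_config.get("mcpServers", {}).items():
        # Missing "timeout" means the client default (60s) applies.
        if server.get("timeout", 0) < minimum:
            flagged.append(name)
    return flagged
```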
If you're still experiencing timeout errors:

- Verify `timeout: 300` is present in your MCP config
- Try increasing the value to `timeout: 600` (10 minutes)

Expected behavior:

- Long-running operations complete instead of failing with `MCP request timed out after 60000ms`

## Google Vertex AI Configuration

Google Vertex AI is Google Cloud's enterprise AI platform and requires specific configuration:
Prerequisites:
Authentication Options:
- API key: Set the `GOOGLE_API_KEY` environment variable
- Service account: Set `GOOGLE_APPLICATION_CREDENTIALS` to point to your service account JSON file

Required Configuration:
- Set `VERTEX_PROJECT_ID` to your Google Cloud project ID
- Set `VERTEX_LOCATION` to your preferred Google Cloud region (default: `us-central1`)

Example Setup:
```bash
# In .env file
GOOGLE_API_KEY=AIzaSyXXXXXXXXXXXXXXXXXXXXXXXXX
VERTEX_PROJECT_ID=my-gcp-project-123
VERTEX_LOCATION=us-central1
```
Or using service account:
```bash
# In .env file
GOOGLE_APPLICATION_CREDENTIALS=/path/to/service-account.json
VERTEX_PROJECT_ID=my-gcp-project-123
VERTEX_LOCATION=us-central1
```
In .taskmaster/config.json:
```json
"global": {
  "vertexProjectId": "my-gcp-project-123",
  "vertexLocation": "us-central1"
}
```
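The two authentication paths and required settings above can be sketched as a single resolution step; `vertex_auth_settings` is an illustrative helper, and the returned dictionary shape is an assumption:

```python
import os

def vertex_auth_settings(env=os.environ):
    """Resolve Vertex AI settings from the documented env vars (sketch)."""
    project = env.get("VERTEX_PROJECT_ID")
    if not project:
        raise ValueError("VERTEX_PROJECT_ID is required for the vertex provider")
    settings = {
        "project": project,
        "location": env.get("VERTEX_LOCATION", "us-central1"),  # documented default
    }
    creds = env.get("GOOGLE_APPLICATION_CREDENTIALS")
    if creds:
        settings["credentials_file"] = creds       # service-account path
    elif env.get("GOOGLE_API_KEY"):
        settings["api_key"] = env["GOOGLE_API_KEY"]
    else:
        raise ValueError("Set GOOGLE_API_KEY or GOOGLE_APPLICATION_CREDENTIALS")
    return settings
```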
## Azure OpenAI Configuration

Azure OpenAI provides enterprise-grade OpenAI models through Microsoft's Azure cloud platform and requires specific configuration:
Prerequisites:
Authentication:
- Set the `AZURE_OPENAI_API_KEY` environment variable with your Azure OpenAI API key

Configuration Options:
Option 1: Using Global Azure Base URL (affects all Azure models)
```jsonc
// In .taskmaster/config.json
{
  "models": {
    "main": {
      "provider": "azure",
      "modelId": "gpt-4o",
      "maxTokens": 16000,
      "temperature": 0.7
    },
    "fallback": {
      "provider": "azure",
      "modelId": "gpt-4o-mini",
      "maxTokens": 10000,
      "temperature": 0.7
    }
  },
  "global": {
    "azureBaseURL": "https://your-resource-name.openai.azure.com/openai/deployments"
  }
}
```
Option 2: Using Per-Model Base URLs (recommended for flexibility)
```jsonc
// In .taskmaster/config.json
{
  "models": {
    "main": {
      "provider": "azure",
      "modelId": "gpt-4o",
      "maxTokens": 16000,
      "temperature": 0.7,
      "baseURL": "https://your-resource-name.openai.azure.com/openai/deployments"
    },
    "research": {
      "provider": "perplexity",
      "modelId": "sonar-pro",
      "maxTokens": 8700,
      "temperature": 0.1
    },
    "fallback": {
      "provider": "azure",
      "modelId": "gpt-4o-mini",
      "maxTokens": 10000,
      "temperature": 0.7,
      "baseURL": "https://your-resource-name.openai.azure.com/openai/deployments"
    }
  }
}
```
Environment Variables:
```bash
# In .env file
AZURE_OPENAI_API_KEY=your-azure-openai-api-key-here

# Optional: Override endpoint for all Azure models
AZURE_OPENAI_ENDPOINT=https://your-resource-name.openai.azure.com/openai/deployments
```
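To see why the full `/openai/deployments` path matters, here is a sketch of how an Azure chat-completions URL is typically composed from the base URL plus the deployment name (the value `modelId` must match); the helper name and the `api-version` value are illustrative:

```python
def azure_chat_url(base_url, deployment, api_version="2024-02-01"):
    """Compose an Azure OpenAI chat-completions URL (illustrative).

    base_url is expected to already end in /openai/deployments, as
    configured in this guide; the deployment segment is appended to it.
    """
    return (
        f"{base_url.rstrip('/')}/{deployment}"
        f"/chat/completions?api-version={api_version}"
    )
```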
Important Notes:
- The `modelId` in your configuration should match the deployment name you created in Azure OpenAI Studio, not the underlying model name
- Per-model `baseURL` settings override the global `azureBaseURL` setting
- When setting `baseURL`, use the full path including `/openai/deployments`

Troubleshooting:
"Resource not found" errors:
- Ensure your `baseURL` includes the full path: `https://your-resource-name.openai.azure.com/openai/deployments`
- Verify the `modelId` exactly matches what's configured in Azure OpenAI Studio

Authentication errors:
- Verify your `AZURE_OPENAI_API_KEY` is correct and has not expired

Model availability errors:
- Ensure your `maxTokens` values stay within the Tokens-per-Minute (TPM) rate limit of your deployment

## Codex CLI Provider

The Codex CLI provider integrates Task Master with OpenAI's Codex CLI, allowing you to use ChatGPT subscription models via OAuth authentication.
Prerequisites:
Installation:
```bash
npm install -g @openai/codex
```
Authentication (OAuth - Primary Method):
```bash
codex login
```
This will open a browser window for OAuth authentication with your ChatGPT account. Once authenticated, Task Master will automatically use these credentials.
Optional API Key Method: While OAuth is the primary and recommended authentication method, you can optionally set an OpenAI API key:
```bash
# In .env file
OPENAI_API_KEY=sk-your-openai-api-key-here
```
Note: The API key will only be injected if explicitly provided. OAuth is always preferred.
Configuration:
```jsonc
// In .taskmaster/config.json
{
  "models": {
    "main": {
      "provider": "codex-cli",
      "modelId": "gpt-5-codex",
      "maxTokens": 128000,
      "temperature": 0.2
    },
    "fallback": {
      "provider": "codex-cli",
      "modelId": "gpt-5",
      "maxTokens": 128000,
      "temperature": 0.2
    }
  },
  "codexCli": {
    "allowNpx": true,
    "skipGitRepoCheck": true,
    "approvalMode": "on-failure",
    "sandboxMode": "workspace-write"
  }
}
```
Available Models:

- `gpt-5` - Latest GPT-5 model (272K max input, 128K max output)
- `gpt-5-codex` - GPT-5 optimized for agentic software engineering (272K max input, 128K max output)

Codex CLI Settings (`codexCli` section):
The codexCli section in your configuration file supports the following options:
- `allowNpx` (boolean, default: `false`): Allow fallback to `npx @openai/codex` if the CLI is not found on `PATH`
- `skipGitRepoCheck` (boolean, default: `false`): Skip the git repository safety check (recommended for CI/non-repo usage)
- `approvalMode` (string): Control command execution approval
  - `"untrusted"`: Require approval for all commands
  - `"on-failure"`: Only require approval after a command fails (default)
  - `"on-request"`: Approve only when explicitly requested
  - `"never"`: Never require approval (not recommended)
- `sandboxMode` (string): Control filesystem access
  - `"read-only"`: Read-only access
  - `"workspace-write"`: Allow writes to the workspace (default)
  - `"danger-full-access"`: Full filesystem access (use with caution)
- `codexPath` (string, optional): Custom path to the codex CLI executable
- `cwd` (string, optional): Working directory for Codex CLI execution
- `fullAuto` (boolean, optional): Fully automatic mode (equivalent to the `--full-auto` flag)
- `dangerouslyBypassApprovalsAndSandbox` (boolean, optional): Bypass all safety checks (dangerous!)
- `color` (string, optional): Color handling - `"always"`, `"never"`, or `"auto"`
- `outputLastMessageFile` (string, optional): Write the last agent message to the specified file
- `verbose` (boolean, optional): Enable verbose logging
- `env` (object, optional): Additional environment variables for Codex CLI

Command-Specific Settings (optional): You can override settings for specific Task Master commands:
```json
{
  "codexCli": {
    "allowNpx": true,
    "approvalMode": "on-failure",
    "commandSpecific": {
      "parse-prd": {
        "approvalMode": "never",
        "verbose": true
      },
      "expand": {
        "sandboxMode": "read-only"
      }
    }
  }
}
```
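The layering of `commandSpecific` overrides on top of the base `codexCli` settings can be sketched as a shallow merge in which the command-specific value wins; this is an illustrative reading of the configuration, not Task Master's actual merge logic:

```python
def effective_codex_settings(codex_cli_config, command):
    """Merge base codexCli settings with a command's overrides (sketch)."""
    base = {k: v for k, v in codex_cli_config.items() if k != "commandSpecific"}
    overrides = codex_cli_config.get("commandSpecific", {}).get(command, {})
    # Shallow merge: command-specific keys replace base keys.
    return {**base, **overrides}
```

With the example configuration above, `parse-prd` would run with `approvalMode: "never"` while still inheriting `allowNpx: true`.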
Codebase Features:
The Codex CLI provider is codebase-capable, meaning it can analyze and interact with your project files. Codebase analysis features are automatically enabled when using codex-cli as your provider and enableCodebaseAnalysis is set to true in your global configuration (default).
Setup Commands:
```bash
# Set Codex CLI for main role
task-master models --set-main gpt-5-codex --codex-cli

# Set Codex CLI for fallback role
task-master models --set-fallback gpt-5 --codex-cli

# Verify configuration
task-master models
```
Troubleshooting:
"codex: command not found" error:
- Install the Codex CLI globally: `npm install -g @openai/codex`
- Verify the installation with `codex --version`
- Or set `allowNpx: true` in your `codexCli` configuration

"Not logged in" errors:
- Run `codex login` to authenticate with your ChatGPT account
- Verify authentication by running `codex` (opens the interactive CLI)

"Old version" warnings:
- Check your installed version with `codex --version`
- Update with `npm install -g @openai/codex@latest`

"Model not available" errors:
- Only `gpt-5` and `gpt-5-codex` are available via an OAuth subscription
- For other OpenAI models, use the `openai` provider with an API key

API key not being used:
- The API key is only injected when explicitly provided; verify `OPENAI_API_KEY` is set in your `.env` file

Important Notes:
- The Codex CLI provider supports the ChatGPT subscription models only (`gpt-5` and `gpt-5-codex`)