backend/docs/CONFIGURATION.md
This guide explains how to configure DeerFlow for your environment.
`config.example.yaml` contains a `config_version` field that tracks schema changes. When the example version is higher than your local `config.yaml`, the application emits a startup warning:

```
WARNING - Your config.yaml (version 0) is outdated — the latest version is 1.
Run `make config-upgrade` to merge new fields into your config.
```
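A minimal sketch of this version check (assumed behavior, not DeerFlow's actual implementation — the function name is hypothetical):

```python
# Hypothetical sketch: compare the local config_version against the
# example's and warn when the local file is behind.
import logging

def check_config_version(local: dict, example: dict) -> bool:
    """Return True if the local config is up to date."""
    local_v = local.get("config_version", 0)      # missing field => version 0
    example_v = example.get("config_version", 0)
    if local_v < example_v:
        logging.warning(
            "Your config.yaml (version %d) is outdated — the latest version is %d. "
            "Run `make config-upgrade` to merge new fields into your config.",
            local_v, example_v,
        )
        return False
    return True
```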
- A missing `config_version` in your config is treated as version 0.
- Run `make config-upgrade` to auto-merge missing fields (your existing values are preserved, and a `.bak` backup is created).
- The latest schema version is the `config_version` in `config.example.yaml`.

Configure the LLM models available to the agent:
```yaml
models:
  - name: gpt-4                        # Internal identifier
    display_name: GPT-4                # Human-readable name
    use: langchain_openai:ChatOpenAI   # LangChain class path
    model: gpt-4                       # Model identifier for API
    api_key: $OPENAI_API_KEY           # API key (use env var)
    max_tokens: 4096                   # Max tokens per request
    temperature: 0.7                   # Sampling temperature
```
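The `use` field is a `module:Class` path. A hypothetical sketch of how such a path could be resolved at runtime via `importlib` (the helper name is an assumption, not DeerFlow's API):

```python
# Resolve a "package.module:Attr" path to the attribute it names.
import importlib

def resolve_use_path(path: str):
    module_name, _, attr = path.partition(":")
    module = importlib.import_module(module_name)
    return getattr(module, attr)

# Demonstrated with a stdlib path, since langchain may not be installed here:
OrderedDict = resolve_use_path("collections:OrderedDict")
```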
Supported Providers:

- OpenAI (`langchain_openai:ChatOpenAI`)
- Anthropic (`langchain_anthropic:ChatAnthropic`)
- DeepSeek (`langchain_deepseek:ChatDeepSeek`)

For OpenAI-compatible gateways (for example Novita or OpenRouter), keep using `langchain_openai:ChatOpenAI` and set `base_url`:
```yaml
models:
  - name: novita-deepseek-v3.2
    display_name: Novita DeepSeek V3.2
    use: langchain_openai:ChatOpenAI
    model: deepseek/deepseek-v3.2
    api_key: $NOVITA_API_KEY
    base_url: https://api.novita.ai/openai
    supports_thinking: true
    when_thinking_enabled:
      extra_body:
        thinking:
          type: enabled
  - name: minimax-m2.5
    display_name: MiniMax M2.5
    use: langchain_openai:ChatOpenAI
    model: MiniMax-M2.5
    api_key: $MINIMAX_API_KEY
    base_url: https://api.minimax.io/v1
    max_tokens: 4096
    temperature: 1.0  # MiniMax requires temperature in (0.0, 1.0]
    supports_vision: true
  - name: minimax-m2.5-highspeed
    display_name: MiniMax M2.5 Highspeed
    use: langchain_openai:ChatOpenAI
    model: MiniMax-M2.5-highspeed
    api_key: $MINIMAX_API_KEY
    base_url: https://api.minimax.io/v1
    max_tokens: 4096
    temperature: 1.0  # MiniMax requires temperature in (0.0, 1.0]
    supports_vision: true
  - name: openrouter-gemini-2.5-flash
    display_name: Gemini 2.5 Flash (OpenRouter)
    use: langchain_openai:ChatOpenAI
    model: google/gemini-2.5-flash-preview
    api_key: $OPENAI_API_KEY
    base_url: https://openrouter.ai/api/v1
```
If your OpenRouter key lives in a differently named environment variable, point `api_key` at it explicitly (for example `api_key: $OPENROUTER_API_KEY`).
Thinking Models: Some models support "thinking" mode for complex reasoning:
```yaml
models:
  - name: deepseek-v3
    supports_thinking: true
    when_thinking_enabled:
      extra_body:
        thinking:
          type: enabled
```
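One plausible way such a config could take effect (a sketch under assumptions — DeerFlow's actual merge logic is not shown in this guide): when the user enables thinking mode, the model's `when_thinking_enabled` overrides are merged into the request kwargs passed to the LangChain client.

```python
# Hypothetical helper: build per-request kwargs, applying
# when_thinking_enabled overrides only when thinking is on
# and the model declares supports_thinking.
def build_request_kwargs(model_cfg: dict, thinking: bool) -> dict:
    kwargs = {k: v for k, v in model_cfg.items()
              if k in ("model", "max_tokens", "temperature")}
    if thinking and model_cfg.get("supports_thinking"):
        kwargs.update(model_cfg.get("when_thinking_enabled", {}))
    return kwargs

cfg = {
    "model": "deepseek-v3",
    "supports_thinking": True,
    "when_thinking_enabled": {"extra_body": {"thinking": {"type": "enabled"}}},
}
```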
Organize tools into logical groups:
```yaml
tool_groups:
  - name: web         # Web browsing and search
  - name: file:read   # Read-only file operations
  - name: file:write  # Write file operations
  - name: bash        # Shell command execution
```
Configure specific tools available to the agent:
```yaml
tools:
  - name: web_search
    group: web
    use: deerflow.community.tavily.tools:web_search_tool
    max_results: 5
    # api_key: $TAVILY_API_KEY  # Optional
```
Built-in Tools:

- `web_search` - Search the web (Tavily)
- `web_fetch` - Fetch web pages (Jina AI)
- `ls` - List directory contents
- `read_file` - Read file contents
- `write_file` - Write file contents
- `str_replace` - String replacement in files
- `bash` - Execute bash commands

DeerFlow supports multiple sandbox execution modes. Configure your preferred mode in `config.yaml`:
Local Execution (runs sandbox code directly on the host machine):
```yaml
sandbox:
  use: deerflow.sandbox.local:LocalSandboxProvider  # Local execution
```
Docker Execution (runs sandbox code in isolated Docker containers):
```yaml
sandbox:
  use: deerflow.community.aio_sandbox:AioSandboxProvider  # Docker-based sandbox
```
Docker Execution with Kubernetes (runs sandbox code in Kubernetes pods via provisioner service):
This mode runs each sandbox in an isolated Kubernetes Pod on your host machine's cluster. Requires Docker Desktop K8s, OrbStack, or similar local K8s setup.
```yaml
sandbox:
  use: deerflow.community.aio_sandbox:AioSandboxProvider
  provisioner_url: http://provisioner:8002
```
When using Docker development (`make docker-start`), DeerFlow starts the provisioner service only if this provisioner mode is configured. In local or plain Docker sandbox modes, the provisioner is skipped.
See Provisioner Setup Guide for detailed configuration, prerequisites, and troubleshooting.
Choose between local execution and Docker-based isolation:
Option 1: Local Sandbox (default, simpler setup):
```yaml
sandbox:
  use: deerflow.sandbox.local:LocalSandboxProvider
```
Option 2: Docker Sandbox (isolated, more secure):
```yaml
sandbox:
  use: deerflow.community.aio_sandbox:AioSandboxProvider
  port: 8080
  auto_start: true
  container_prefix: deer-flow-sandbox
  # Optional: additional mounts
  mounts:
    - host_path: /path/on/host
      container_path: /path/in/container
      read_only: false
```
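As an illustration, the `mounts` entries map naturally onto Docker `-v` arguments. The helper below is a sketch (the function name is hypothetical; field names follow the config above):

```python
# Translate mount entries into `docker run -v` arguments:
# host_path:container_path, with ":ro" appended when read_only is true.
def mounts_to_docker_args(mounts: list[dict]) -> list[str]:
    args = []
    for m in mounts:
        spec = f"{m['host_path']}:{m['container_path']}"
        if m.get("read_only"):
            spec += ":ro"
        args += ["-v", spec]
    return args
```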
Configure the skills directory for specialized workflows:
```yaml
skills:
  # Host path (optional, default: ../skills)
  path: /custom/path/to/skills
  # Container mount path (default: /mnt/skills)
  container_path: /mnt/skills
```
How Skills Work:
- Skills live under `deer-flow/skills/{public,custom}/`
- Each skill directory contains a `SKILL.md` file with metadata

Automatic conversation title generation:
```yaml
title:
  enabled: true
  max_words: 6
  max_chars: 60
  model_name: null  # Use first model in list
```
DeerFlow supports environment variable substitution using the `$` prefix:
```yaml
models:
  - api_key: $OPENAI_API_KEY  # Reads from environment
```
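A minimal sketch of this substitution, assuming the semantics described here (any string value starting with `$` is looked up in the environment; this is not DeerFlow's actual implementation):

```python
# Recursively replace "$VAR" string values with os.environ["VAR"].
import os

def substitute_env(value):
    if isinstance(value, dict):
        return {k: substitute_env(v) for k, v in value.items()}
    if isinstance(value, list):
        return [substitute_env(v) for v in value]
    if isinstance(value, str) and value.startswith("$"):
        return os.environ.get(value[1:], "")
    return value
```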
Common Environment Variables:

- `OPENAI_API_KEY` - OpenAI API key
- `ANTHROPIC_API_KEY` - Anthropic API key
- `DEEPSEEK_API_KEY` - DeepSeek API key
- `NOVITA_API_KEY` - Novita API key (OpenAI-compatible endpoint)
- `TAVILY_API_KEY` - Tavily search API key
- `DEER_FLOW_CONFIG_PATH` - Custom config file path

The configuration file should be placed in the project root directory (`deer-flow/config.yaml`), not in the backend directory.
DeerFlow searches for configuration in this order:
1. Explicit `config_path` argument
2. `DEER_FLOW_CONFIG_PATH` environment variable
3. `config.yaml` in the current working directory (typically `backend/` when running)
4. `config.yaml` in the parent directory (project root: `deer-flow/`)

Best practices:

- Keep `config.yaml` in the project root - not in the `backend/` directory
- Do not commit `config.yaml` - it is already in `.gitignore`
- Keep `config.example.yaml` updated - document all new options

If configuration is not being picked up, check that:

- `config.yaml` exists in the project root directory (`deer-flow/config.yaml`), or the `DEER_FLOW_CONFIG_PATH` environment variable points at a custom location
- the `$` prefix is used for env var references
- the `deer-flow/skills/` directory exists and each skill has a `SKILL.md` file
- the `skills.path` configuration is correct if using a custom path

See `config.example.yaml` for complete examples of all configuration options.