Configuration Reference

Complete reference for Oh My OpenCode plugin configuration. During the rename transition, the runtime recognizes both oh-my-openagent.json[c] and legacy oh-my-opencode.json[c] files.


Getting Started

File Locations

User config is loaded first, then project config overrides it. In each directory, the compatibility layer recognizes both the renamed and legacy basenames.

  1. Project config: .opencode/oh-my-openagent.json[c] or .opencode/oh-my-opencode.json[c]
  2. User config (.jsonc preferred over .json):
| Platform | Path candidates |
|---|---|
| macOS/Linux | ~/.config/opencode/oh-my-openagent.json[c], ~/.config/opencode/oh-my-opencode.json[c] |
| Windows | %APPDATA%\opencode\oh-my-openagent.json[c], %APPDATA%\opencode\oh-my-opencode.json[c] |

Rename compatibility: The published package and CLI binary remain oh-my-opencode. OpenCode plugin registration prefers oh-my-openagent, while legacy oh-my-opencode entries and config basenames still load during the transition. Config detection checks oh-my-opencode before oh-my-openagent, so if both plugin config basenames exist in the same directory, the legacy oh-my-opencode.* file currently wins.

JSONC supports // line comments, /* block comments */, and trailing commas.
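As an illustration, here is a minimal .jsonc config exercising each JSONC relaxation mentioned above (the hook name is only an example):

```jsonc
// Line comment: allowed in JSONC
{
  /* Block comment: also allowed */
  "$schema": "https://raw.githubusercontent.com/code-yeongyu/oh-my-openagent/dev/assets/oh-my-opencode.schema.json",
  "disabled_hooks": [
    "startup-toast", // trailing commas are tolerated
  ],
}
```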

Enable schema autocomplete:

json
{
  "$schema": "https://raw.githubusercontent.com/code-yeongyu/oh-my-openagent/dev/assets/oh-my-opencode.schema.json"
}

Run bunx oh-my-opencode install for guided setup. Run opencode models to list available models.

Quick Start Example

Here's a practical starting configuration:

jsonc
{
  "$schema": "https://raw.githubusercontent.com/code-yeongyu/oh-my-openagent/dev/assets/oh-my-opencode.schema.json",

  "agents": {
    // Main orchestrator: Claude Opus or Kimi K2.5 work best
    "sisyphus": {
      "model": "kimi-for-coding/k2p5",
      "ultrawork": { "model": "anthropic/claude-opus-4-7", "variant": "max" },
    },

    // Research agents: cheap fast models are fine
    "librarian": { "model": "google/gemini-3-flash" },
    "explore": { "model": "github-copilot/grok-code-fast-1" },

    // Architecture consultation: GPT-5.4 or Claude Opus
    "oracle": { "model": "openai/gpt-5.4", "variant": "high" },

    // Prometheus inherits sisyphus model; just add prompt guidance
    "prometheus": {
      "prompt_append": "Leverage deep & quick agents heavily, always in parallel.",
    },
  },

  "categories": {
    // quick - trivial tasks
    "quick": { "model": "opencode/gpt-5-nano" },

    // unspecified-low - moderate tasks
    "unspecified-low": { "model": "anthropic/claude-sonnet-4-6" },

    // unspecified-high - complex work
    "unspecified-high": { "model": "anthropic/claude-opus-4-7", "variant": "max" },

    // writing - docs/prose
    "writing": { "model": "google/gemini-3-flash" },

    // visual-engineering - Gemini dominates visual tasks
    "visual-engineering": {
      "model": "google/gemini-3.1-pro",
      "variant": "high",
    },

    // Custom category for git operations
    "git": {
      "model": "opencode/gpt-5-nano",
      "description": "All git operations",
      "prompt_append": "Focus on atomic commits, clear messages, and safe operations.",
    },
  },

  // Limit expensive providers; let cheap ones run freely
  "background_task": {
    "providerConcurrency": {
      "anthropic": 3,
      "openai": 3,
      "opencode": 10,
      "zai-coding-plan": 10,
    },
    "modelConcurrency": {
      "anthropic/claude-opus-4-7": 2,
      "opencode/gpt-5-nano": 20,
    },
  },

  "experimental": { "aggressive_truncation": true, "task_system": true },
  "tmux": { "enabled": false },
}

Core Concepts

Agents

Override built-in agent settings. Available agents: sisyphus, hephaestus, prometheus, oracle, librarian, explore, multimodal-looker, metis, momus, atlas, sisyphus-junior.

json
{
  "agents": {
    "explore": { "model": "anthropic/claude-haiku-4-5", "temperature": 0.5 },
    "multimodal-looker": { "disable": true }
  }
}

Disable agents entirely: { "disabled_agents": ["oracle", "multimodal-looker"] }

Core agents receive an injected runtime order field for deterministic Tab cycling in the UI: Sisyphus = 1, Hephaestus = 2, Prometheus = 3, Atlas = 4. This is not a user-configurable config key.

Agent Options

| Option | Type | Description |
|---|---|---|
| model | string | Model override (provider/model) |
| fallback_models | string \| array | Fallback models on API errors. Supports strings or mixed arrays of strings and object entries with per-model settings |
| temperature | number | Sampling temperature |
| top_p | number | Top-p sampling |
| prompt | string | Replace system prompt. Supports file:// URIs |
| prompt_append | string | Append to system prompt. Supports file:// URIs |
| tools | array | Allowed tools list |
| disable | boolean | Disable this agent |
| mode | string | Agent mode |
| color | string | UI color |
| permission | object | Per-tool permissions (see below) |
| category | string | Inherit model from category |
| variant | string | Model variant: max, high, medium, low, xhigh. Normalized to supported values |
| maxTokens | number | Max response tokens |
| thinking | object | Anthropic extended thinking |
| reasoningEffort | string | OpenAI reasoning: none, minimal, low, medium, high, xhigh. Normalized to supported values |
| textVerbosity | string | Text verbosity: low, medium, high |
| providerOptions | object | Provider-specific options |

Anthropic Extended Thinking

json
{
  "agents": {
    "oracle": { "thinking": { "type": "enabled", "budgetTokens": 200000 } }
  }
}

Agent Permissions

Control what tools an agent can use:

json
{
  "agents": {
    "explore": {
      "permission": {
        "edit": "deny",
        "bash": "ask",
        "webfetch": "allow"
      }
    }
  }
}

| Permission | Values |
|---|---|
| edit | ask / allow / deny |
| bash | ask / allow / deny or per-command: { "git": "allow", "rm": "deny" } |
| webfetch | ask / allow / deny |
| doom_loop | ask / allow / deny |
| external_directory | ask / allow / deny |

Fallback Models with Per-Model Settings

fallback_models accepts either a single model string or an array. Array entries can be plain strings or objects with individual model settings:

jsonc
{
  "agents": {
    "sisyphus": {
      "model": "anthropic/claude-opus-4-7",
      "fallback_models": [
        // Simple string fallback
        "openai/gpt-5.4",
        // Object with per-model settings
        {
          "model": "google/gemini-3.1-pro",
          "variant": "high",
          "temperature": 0.2
        },
        {
          "model": "anthropic/claude-sonnet-4-6",
          "thinking": { "type": "enabled", "budgetTokens": 64000 }
        }
      ]
    }
  }
}

Object entries support: model, variant, reasoningEffort, temperature, top_p, maxTokens, thinking.

File URIs for Prompts

Both prompt and prompt_append support loading content from files via file:// URIs. Category-level prompt_append supports the same URI forms.

jsonc
{
  "agents": {
    "sisyphus": {
      "prompt_append": "file:///absolute/path/to/prompt.txt"
    },
    "oracle": {
      "prompt": "file://./relative/to/project/prompt.md"
    },
    "explore": {
      "prompt_append": "file://~/home/dir/prompt.txt"
    }
  },
  "categories": {
    "custom": {
      "model": "anthropic/claude-sonnet-4-6",
      "prompt_append": "file://./category-context.md"
    }
  }
}

Paths can be absolute (file:///abs/path), relative to project root (file://./rel/path), or home-relative (file://~/home/path). If a file URI cannot be decoded, resolved, or read, OmO inserts a warning placeholder into the prompt instead of failing hard.

Categories

Domain-specific model delegation used by the task() tool. When Sisyphus delegates work, it picks a category, not a model name.

Built-in Categories

| Category | Default Model | Description |
|---|---|---|
| visual-engineering | google/gemini-3.1-pro (high) | Frontend, UI/UX, design, animation |
| ultrabrain | openai/gpt-5.4 (xhigh) | Deep logical reasoning, complex architecture |
| deep | openai/gpt-5.4 (medium) | Autonomous problem-solving, thorough research |
| artistry | google/gemini-3.1-pro (high) | Creative/unconventional approaches |
| quick | openai/gpt-5.4-mini | Trivial tasks, typo fixes, single-file changes |
| unspecified-low | anthropic/claude-sonnet-4-6 | General tasks, low effort |
| unspecified-high | anthropic/claude-opus-4-7 (max) | General tasks, high effort |
| writing | google/gemini-3-flash | Documentation, prose, technical writing |

Note: Built-in defaults only apply if the category is present in your config. Otherwise the system default model is used.
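Based on that note, listing a built-in category with an empty object should be enough to activate its default model; this is a sketch, not verified behavior for every category, so check the result with bunx oh-my-opencode doctor:

```jsonc
{
  "categories": {
    // Present with no overrides: the built-in default should apply
    "ultrabrain": {},
    // Present with an override: your model wins over the built-in default
    "deep": { "model": "anthropic/claude-opus-4-7", "variant": "max" },
  },
}
```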

Category Options

| Option | Type | Default | Description |
|---|---|---|---|
| model | string | - | Model override |
| fallback_models | string \| array | - | Fallback models on API errors. Supports strings or mixed arrays of strings and object entries with per-model settings |
| temperature | number | - | Sampling temperature |
| top_p | number | - | Top-p sampling |
| maxTokens | number | - | Max response tokens |
| thinking | object | - | Anthropic extended thinking |
| reasoningEffort | string | - | OpenAI reasoning effort. Unsupported values are normalized |
| textVerbosity | string | - | Text verbosity |
| tools | array | - | Allowed tools |
| prompt_append | string | - | Append to system prompt |
| variant | string | - | Model variant. Unsupported values are normalized |
| description | string | - | Shown in task() tool prompt |
| is_unstable_agent | boolean | false | Force background mode + monitoring. Auto-enabled for Gemini models. |

Disable categories: { "disabled_categories": ["ultrabrain"] }

Model Resolution

Runtime priority:

  1. UI-selected model - model chosen in the OpenCode UI, for primary agents
  2. User override - model set in config → used exactly as-is. Even on cold cache, explicit user configuration takes precedence over hardcoded fallback chains
  3. Category default - model inherited from the assigned category config
  4. User fallback_models - user-configured fallback list is tried before built-in fallback chains
  5. Provider fallback chain - built-in provider/model chain from OmO source
  6. System default - OpenCode's configured default model
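To make the priority order concrete, here is a sketch with hypothetical model choices: sisyphus resolves to its explicit user override (step 2) even though it also names a category, while hephaestus has no model set and inherits its category's model (step 3):

```jsonc
{
  "agents": {
    // Step 2: explicit model override wins; the category is not consulted for the model
    "sisyphus": { "model": "kimi-for-coding/k2p5", "category": "unspecified-high" },
    // Step 3: no model set, so the assigned category's model is inherited
    "hephaestus": { "category": "deep" },
  },
  "categories": {
    "deep": { "model": "openai/gpt-5.4", "variant": "medium" },
  },
}
```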

Model Settings Compatibility

Model settings are compatibility-normalized against model capabilities instead of failing hard.

Normalized fields:

  • variant - downgraded to the closest supported value
  • reasoningEffort - downgraded to the closest supported value, or removed if unsupported
  • temperature - removed if unsupported by the model metadata
  • top_p - removed if unsupported by the model metadata
  • maxTokens - capped to the model's reported max output limit
  • thinking - removed if the target model does not support thinking

Examples:

  • Claude models do not support reasoningEffort - it is removed automatically
  • GPT-4.1 does not support reasoning - reasoningEffort is removed
  • o-series models support none through high - xhigh is downgraded to high
  • GPT-5 supports none, minimal, low, medium, high, xhigh - all pass through
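As a hedged illustration of these rules, the following config is accepted as-is but normalized at runtime (exact outcomes depend on the model metadata OmO resolves):

```jsonc
{
  "agents": {
    "oracle": {
      "model": "anthropic/claude-opus-4-7",
      // Claude models do not support reasoningEffort: removed automatically
      "reasoningEffort": "high",
      // Anthropic thinking is supported here, so this is kept
      "thinking": { "type": "enabled", "budgetTokens": 32000 },
    },
    "momus": {
      "model": "openai/gpt-5.4",
      // GPT-5 supports xhigh, so this passes through unchanged
      "reasoningEffort": "xhigh",
    },
  },
}
```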

Capability data comes from provider runtime metadata first. OmO also ships bundled models.dev-backed capability data, supports a refreshable local models.dev cache, and falls back to heuristic family detection plus alias rules when exact metadata is unavailable. bunx oh-my-opencode doctor surfaces capability diagnostics and warns when a configured model relies on compatibility fallback.

Agent Provider Chains

Default model and provider priority chain per agent (chain entries are tried in order, separated by →; | separates equivalent providers for one entry):

  • Sisyphus (default claude-opus-4-7): anthropic|github-copilot|opencode/claude-opus-4-7 (max) → opencode-go/kimi-k2.5 → kimi-for-coding/k2p5 → opencode|moonshotai|moonshotai-cn|firmware|ollama-cloud|aihubmix/kimi-k2.5 → openai|github-copilot|opencode/gpt-5.4 (medium) → zai-coding-plan|opencode/glm-5 → opencode/big-pickle
  • Hephaestus (default gpt-5.4): gpt-5.4 (medium)
  • oracle (default gpt-5.4): openai|github-copilot|opencode/gpt-5.4 (high) → google|github-copilot|opencode/gemini-3.1-pro (high) → anthropic|github-copilot|opencode/claude-opus-4-7 (max) → opencode-go/glm-5
  • librarian (default gpt-5.4-mini-fast): openai/gpt-5.4-mini-fast → opencode-go|vercel/minimax-m2.7-highspeed → opencode-go|vercel/minimax-m2.7 → anthropic|opencode|vercel/claude-haiku-4-5 → openai|opencode|vercel/gpt-5.4-nano
  • explore (default gpt-5.4-mini-fast): openai/gpt-5.4-mini-fast → opencode-go|vercel/minimax-m2.7-highspeed → opencode-go|vercel/minimax-m2.7 → anthropic|opencode|vercel/claude-haiku-4-5 → openai|opencode|vercel/gpt-5.4-nano
  • multimodal-looker (default gpt-5.4): openai|opencode/gpt-5.4 (medium) → opencode-go/kimi-k2.5 → zai-coding-plan/glm-4.6v → openai|github-copilot|opencode/gpt-5-nano
  • Prometheus (default claude-opus-4-7): anthropic|github-copilot|opencode/claude-opus-4-7 (max) → openai|github-copilot|opencode/gpt-5.4 (high) → opencode-go/glm-5 → google|github-copilot|opencode/gemini-3.1-pro
  • Metis (default claude-opus-4-7): anthropic|github-copilot|opencode/claude-opus-4-7 (max) → openai|github-copilot|opencode/gpt-5.4 (high) → opencode-go/glm-5 → kimi-for-coding/k2p5
  • Momus (default gpt-5.4): openai|github-copilot|opencode/gpt-5.4 (xhigh) → anthropic|github-copilot|opencode/claude-opus-4-7 (max) → google|github-copilot|opencode/gemini-3.1-pro (high) → opencode-go/glm-5
  • Atlas (default claude-sonnet-4-6): anthropic|github-copilot|opencode/claude-sonnet-4-6 → opencode-go/kimi-k2.5 → openai|github-copilot|opencode/gpt-5.4 (medium) → opencode-go/minimax-m2.7

Category Provider Chains

Default model and provider priority chain per category (same notation as above):

  • visual-engineering (default gemini-3.1-pro): google|github-copilot|opencode/gemini-3.1-pro (high) → zai-coding-plan|opencode/glm-5 → anthropic|github-copilot|opencode/claude-opus-4-7 (max) → opencode-go/glm-5 → kimi-for-coding/k2p5
  • ultrabrain (default gpt-5.4): openai|opencode/gpt-5.4 (xhigh) → google|github-copilot|opencode/gemini-3.1-pro (high) → anthropic|github-copilot|opencode/claude-opus-4-7 (max) → opencode-go/glm-5
  • deep (default gpt-5.4): openai|github-copilot|venice|opencode/gpt-5.4 (medium) → anthropic|github-copilot|opencode/claude-opus-4-7 (max) → google|github-copilot|opencode/gemini-3.1-pro (high)
  • artistry (default gemini-3.1-pro): google|github-copilot|opencode/gemini-3.1-pro (high) → anthropic|github-copilot|opencode/claude-opus-4-7 (max) → openai|github-copilot|opencode/gpt-5.4
  • quick (default gpt-5.4-mini): openai|github-copilot|opencode/gpt-5.4-mini → anthropic|github-copilot|opencode/claude-haiku-4-5 → google|github-copilot|opencode/gemini-3-flash → opencode-go/minimax-m2.7 → opencode/gpt-5-nano
  • unspecified-low (default claude-sonnet-4-6): anthropic|github-copilot|opencode/claude-sonnet-4-6 → openai|opencode/gpt-5.3-codex (medium) → opencode-go/kimi-k2.5 → google|github-copilot|opencode/gemini-3-flash → opencode-go/minimax-m2.7
  • unspecified-high (default claude-opus-4-7): anthropic|github-copilot|opencode/claude-opus-4-7 (max) → openai|github-copilot|opencode/gpt-5.4 (high) → zai-coding-plan|opencode/glm-5 → kimi-for-coding/k2p5 → opencode-go/glm-5 → opencode/kimi-k2.5 → opencode|moonshotai|moonshotai-cn|firmware|ollama-cloud|aihubmix/kimi-k2.5
  • writing (default gemini-3-flash): google|github-copilot|opencode/gemini-3-flash → opencode-go/kimi-k2.5 → anthropic|github-copilot|opencode/claude-sonnet-4-6 → opencode-go/minimax-m2.7

Run bunx oh-my-opencode doctor --verbose to see effective model resolution for your config.


Task System

Background Tasks

Control parallel agent execution and concurrency limits.

json
{
  "background_task": {
    "defaultConcurrency": 5,
    "staleTimeoutMs": 180000,
    "providerConcurrency": { "anthropic": 3, "openai": 5, "google": 10 },
    "modelConcurrency": { "anthropic/claude-opus-4-7": 2 }
  }
}

| Option | Default | Description |
|---|---|---|
| defaultConcurrency | - | Max concurrent tasks (all providers) |
| staleTimeoutMs | 180000 | Interrupt tasks with no activity (min: 60000) |
| providerConcurrency | - | Per-provider limits (key = provider name) |
| modelConcurrency | - | Per-model limits (key = provider/model). Overrides provider limits. |

Priority: modelConcurrency > providerConcurrency > defaultConcurrency
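Applying that priority rule: under the config below, anthropic/claude-opus-4-7 tasks are capped at 2, any other anthropic model at 3, and models from unlisted providers at the default of 5 (a commented sketch of the lookup order):

```jsonc
{
  "background_task": {
    "defaultConcurrency": 5, // any provider not listed below
    "providerConcurrency": { "anthropic": 3 }, // anthropic models without a model-level limit
    "modelConcurrency": { "anthropic/claude-opus-4-7": 2 }, // most specific limit wins
  },
}
```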

Sisyphus Agent

Configure the main orchestration system.

json
{
  "sisyphus_agent": {
    "disabled": false,
    "default_builder_enabled": false,
    "planner_enabled": true,
    "replace_plan": true
  }
}

| Option | Default | Description |
|---|---|---|
| disabled | false | Disable all Sisyphus orchestration, restore original build/plan |
| default_builder_enabled | false | Enable OpenCode-Builder agent (off by default) |
| planner_enabled | true | Enable Prometheus (Planner) agent |
| replace_plan | true | Demote default plan agent to subagent mode |

Sisyphus agents can also be customized under agents using their names: Sisyphus, OpenCode-Builder, Prometheus (Planner), Metis (Plan Consultant).

Sisyphus Tasks

Enable the Sisyphus Tasks system for cross-session task tracking.

json
{
  "sisyphus": {
    "tasks": {
      "enabled": false,
      "storage_path": ".sisyphus/tasks",
      "claude_code_compat": false
    }
  }
}

| Option | Default | Description |
|---|---|---|
| enabled | false | Enable Sisyphus Tasks system |
| storage_path | .sisyphus/tasks | Storage path (relative to project root) |
| claude_code_compat | false | Enable Claude Code path compatibility mode |

Features

Skills

Skills bring domain-specific expertise and embedded MCPs.

Built-in skills: playwright, playwright-cli, agent-browser, dev-browser, git-master, frontend-ui-ux

Disable built-in skills: { "disabled_skills": ["playwright"] }

Skills Configuration

json
{
  "skills": {
    "sources": [
      { "path": "./my-skills", "recursive": true },
      "https://example.com/skill.yaml"
    ],
    "enable": ["my-skill"],
    "disable": ["other-skill"],
    "my-skill": {
      "description": "What it does",
      "template": "Custom prompt template",
      "from": "source-file.ts",
      "model": "custom/model",
      "agent": "custom-agent",
      "subtask": true,
      "argument-hint": "usage hint",
      "license": "MIT",
      "compatibility": ">= 3.0.0",
      "metadata": { "author": "Your Name" },
      "allowed-tools": ["read", "bash"]
    }
  }
}

| sources option | Default | Description |
|---|---|---|
| path | - | Local path or remote URL |
| recursive | false | Recurse into subdirectories |
| glob | - | Glob pattern for file selection |
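For instance, the glob option can narrow a recursive source to specific files (the path and pattern here are illustrative, not defaults):

```jsonc
{
  "skills": {
    "sources": [
      // Only load YAML skill definitions found anywhere under ./team-skills
      { "path": "./team-skills", "recursive": true, "glob": "**/*.yaml" },
    ],
  },
}
```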

Hooks

Disable built-in hooks via disabled_hooks:

json
{ "disabled_hooks": ["comment-checker"] }

Available hooks: todo-continuation-enforcer, context-window-monitor, session-recovery, session-notification, comment-checker, grep-output-truncator, tool-output-truncator, directory-agents-injector, directory-readme-injector, empty-task-response-detector, think-mode, anthropic-context-window-limit-recovery, rules-injector, background-notification, auto-update-checker, startup-toast, keyword-detector, agent-usage-reminder, non-interactive-env, interactive-bash-session, compaction-context-injector, thinking-block-validator, claude-code-hooks, ralph-loop, preemptive-compaction, auto-slash-command, sisyphus-junior-notepad, no-sisyphus-gpt, start-work, runtime-fallback

Notes:

  • directory-agents-injector - auto-disabled on OpenCode 1.1.37+ (native AGENTS.md support)
  • no-sisyphus-gpt - do not disable. It blocks incompatible GPT models for Sisyphus while allowing the dedicated GPT-5.4 prompt path.
  • startup-toast is a sub-feature of auto-update-checker. Disable just the toast by adding startup-toast to disabled_hooks.
  • session-recovery - automatically recovers from recoverable session errors (missing tool results, unavailable tools, thinking block violations). Shows toast notifications during recovery. Enable experimental.auto_resume for automatic retry after recovery.
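Combining those notes, a config that keeps the update checker but silences its startup toast, and opts into automatic retry after session recovery, could look like this (hook and option names are taken from the lists above):

```jsonc
{
  // auto-update-checker stays enabled; only its startup toast is suppressed
  "disabled_hooks": ["startup-toast"],
  // Let session-recovery retry automatically after it repairs a session
  "experimental": { "auto_resume": true },
}
```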

Commands

Disable built-in commands via disabled_commands:

json
{ "disabled_commands": ["init-deep", "start-work"] }

Available commands: init-deep, ralph-loop, ulw-loop, cancel-ralph, refactor, start-work, stop-continuation, handoff

Browser Automation

| Provider | Interface | Installation |
|---|---|---|
| playwright (default) | MCP tools | Auto-installed via npx |
| agent-browser | Bash CLI | bun add -g agent-browser && agent-browser install |

Switch provider:

json
{ "browser_automation_engine": { "provider": "agent-browser" } }

Tmux Integration

Run background subagents in separate tmux panes. Requires running inside tmux with opencode --port <port>.

json
{
  "tmux": {
    "enabled": true,
    "layout": "main-vertical",
    "main_pane_size": 60,
    "main_pane_min_width": 120,
    "agent_pane_min_width": 40
  }
}

| Option | Default | Description |
|---|---|---|
| enabled | false | Enable tmux pane spawning |
| layout | main-vertical | main-vertical / main-horizontal / tiled / even-horizontal / even-vertical |
| main_pane_size | 60 | Main pane % (20–80) |
| main_pane_min_width | 120 | Min main pane columns |
| agent_pane_min_width | 40 | Min agent pane columns |

Git Master

Configure git commit behavior:

json
{ "git_master": { "commit_footer": true, "include_co_authored_by": true } }

Comment Checker

Customize the comment quality checker:

json
{
  "comment_checker": {
    "custom_prompt": "Your message. Use {{comments}} placeholder."
  }
}

Notification

Force-enable session notifications:

json
{ "notification": { "force_enable": true } }

force_enable (default: false) - when true, forces session-notification even if external notification plugins are detected.

MCPs

Built-in MCPs (enabled by default): websearch (Exa AI), context7 (library docs), grep_app (GitHub code search).

json
{ "disabled_mcps": ["websearch", "context7", "grep_app"] }

LSP

Configure Language Server Protocol integration:

json
{
  "lsp": {
    "typescript-language-server": {
      "command": ["typescript-language-server", "--stdio"],
      "extensions": [".ts", ".tsx"],
      "priority": 10,
      "env": { "NODE_OPTIONS": "--max-old-space-size=4096" },
      "initialization": {
        "preferences": { "includeInlayParameterNameHints": "all" }
      }
    },
    "pylsp": { "disabled": true }
  }
}

| Option | Type | Description |
|---|---|---|
| command | array | Command to start LSP server |
| extensions | array | File extensions (e.g. [".ts"]) |
| priority | number | Priority when multiple servers match |
| env | object | Environment variables |
| initialization | object | Init options passed to server |
| disabled | boolean | Disable this server |

Advanced

Runtime Fallback

Auto-switches to backup models on API errors.

Simple configuration (enable/disable with defaults):

json
{ "runtime_fallback": true }
{ "runtime_fallback": false }

Advanced configuration (full control):

json
{
  "runtime_fallback": {
    "enabled": true,
    "retry_on_errors": [400, 429, 503, 529],
    "max_fallback_attempts": 3,
    "cooldown_seconds": 60,
    "timeout_seconds": 30,
    "notify_on_fallback": true
  }
}

| Option | Default | Description |
|---|---|---|
| enabled | false | Enable runtime fallback |
| retry_on_errors | [400, 429, 503, 529] | HTTP codes that trigger fallback. Also handles classified provider key errors. |
| max_fallback_attempts | 3 | Max fallback attempts per session (1–20) |
| cooldown_seconds | 60 | Seconds before retrying a failed model |
| timeout_seconds | 30 | Seconds before forcing next fallback. Set to 0 to disable timeout-based escalation and provider retry message detection. |
| notify_on_fallback | true | Toast notification on model switch |

Define fallback_models per agent or category:

json
{
  "agents": {
    "sisyphus": {
      "model": "anthropic/claude-opus-4-7",
      "fallback_models": [
        "openai/gpt-5.4",
        {
          "model": "google/gemini-3.1-pro",
          "variant": "high"
        }
      ]
    }
  }
}

fallback_models also supports object-style entries so you can attach settings to a specific fallback model:

json
{
  "agents": {
    "sisyphus": {
      "model": "anthropic/claude-opus-4-7",
      "fallback_models": [
        "openai/gpt-5.4",
        {
          "model": "anthropic/claude-sonnet-4-6",
          "variant": "high",
          "thinking": { "type": "enabled", "budgetTokens": 12000 }
        },
        {
          "model": "openai/gpt-5.3-codex",
          "reasoningEffort": "high",
          "temperature": 0.2,
          "top_p": 0.95,
          "maxTokens": 8192
        }
      ]
    }
  }
}

Mixed arrays are allowed, so string entries and object entries can appear together in the same fallback chain.

Object-style fallback_models

Object entries use the following shape:

| Field | Type | Description |
|---|---|---|
| model | string | Fallback model ID. Provider prefix is optional when OmO can inherit the current/default provider. |
| variant | string | Explicit variant override for this fallback entry. |
| reasoningEffort | string | OpenAI reasoning effort override for this fallback entry. |
| temperature | number | Temperature applied if this fallback model becomes active. |
| top_p | number | Top-p applied if this fallback model becomes active. |
| maxTokens | number | Max response tokens applied if this fallback model becomes active. |
| thinking | object | Anthropic thinking config applied if this fallback model becomes active. |

Per-model settings are fallback-only. They are promoted only when that specific fallback model is actually selected, so they do not override your primary model settings when the primary model resolves successfully.

thinking uses the same shape as the normal agent/category option:

| Field | Type | Description |
|---|---|---|
| type | string | enabled or disabled |
| budgetTokens | number | Optional Anthropic thinking budget |

Object entries can also omit the provider prefix when OmO can infer it from the current/default provider. If you provide both inline variant syntax in model and an explicit variant field, the explicit variant field wins.

Full examples

1. Simple string chain

Use strings when you only need an ordered fallback chain:

json
{
  "agents": {
    "atlas": {
      "model": "anthropic/claude-sonnet-4-6",
      "fallback_models": [
        "anthropic/claude-haiku-4-5",
        "openai/gpt-5.4",
        "google/gemini-3.1-pro"
      ]
    }
  }
}

2. Same-provider shorthand

If the primary model already establishes the provider, fallback entries can omit the prefix:

json
{
  "agents": {
    "atlas": {
      "model": "openai/gpt-5.4",
      "fallback_models": [
        "gpt-5.4-mini",
        {
          "model": "gpt-5.3-codex",
          "reasoningEffort": "medium",
          "maxTokens": 4096
        }
      ]
    }
  }
}

In this example OmO treats gpt-5.4-mini and gpt-5.3-codex as OpenAI fallback entries because the current/default provider is already openai.

3. Mixed cross-provider chain

Mix string entries and object entries when only some fallback models need special settings:

json
{
  "agents": {
    "sisyphus": {
      "model": "anthropic/claude-opus-4-7",
      "fallback_models": [
        "openai/gpt-5.4",
        {
          "model": "anthropic/claude-sonnet-4-6",
          "variant": "high",
          "thinking": { "type": "enabled", "budgetTokens": 12000 }
        },
        {
          "model": "google/gemini-3.1-pro",
          "variant": "high"
        }
      ]
    }
  }
}

4. Category-level fallback chain

fallback_models works the same way under categories:

json
{
  "categories": {
    "deep": {
      "model": "openai/gpt-5.3-codex",
      "fallback_models": [
        {
          "model": "openai/gpt-5.4",
          "reasoningEffort": "xhigh",
          "maxTokens": 12000
        },
        {
          "model": "anthropic/claude-opus-4-7",
          "variant": "max",
          "temperature": 0.2
        },
        "google/gemini-3.1-pro(high)"
      ]
    }
  }
}

5. Full object entry with every supported field

This shows every supported object-style parameter in one place:

json
{
  "agents": {
    "oracle": {
      "model": "openai/gpt-5.4",
      "fallback_models": [
        {
          "model": "openai/gpt-5.3-codex(low)",
          "variant": "xhigh",
          "reasoningEffort": "high",
          "temperature": 0.3,
          "top_p": 0.9,
          "maxTokens": 8192,
          "thinking": {
            "type": "disabled"
          }
        }
      ]
    }
  }
}

In this example the explicit "variant": "xhigh" overrides the inline (low) suffix in "model".

This final example is a complete shape reference. In real configs, prefer provider-appropriate settings:

  • use reasoningEffort for OpenAI reasoning models
  • use thinking for Anthropic thinking-capable models
  • use variant, temperature, top_p, and maxTokens only when that fallback model supports them

Model Capabilities

OmO can refresh a local models.dev capability snapshot on startup. This cache is controlled by model_capabilities.

jsonc
{
  "model_capabilities": {
    "enabled": true,
    "auto_refresh_on_start": true,
    "refresh_timeout_ms": 5000,
    "source_url": "https://models.dev/api.json"
  }
}

| Option | Default behavior | Description |
|---|---|---|
| enabled | enabled unless explicitly set to false | Master switch for model capability refresh behavior |
| auto_refresh_on_start | refresh on startup unless explicitly set to false | Refresh the local models.dev cache during startup checks |
| refresh_timeout_ms | 5000 | Timeout for the startup refresh attempt |
| source_url | https://models.dev/api.json | Override the models.dev source URL |

Notes:

  • Startup refresh runs through the auto-update checker hook.
  • Manual refresh is available via bunx oh-my-opencode refresh-model-capabilities.
  • Provider runtime metadata still takes priority when OmO resolves capabilities for compatibility checks.

Hashline Edit

Replaces the built-in Edit tool with a hash-anchored version using LINE#ID references to prevent stale-line edits. Disabled by default.

json
{ "hashline_edit": true }

When enabled, two companion hooks become active: hashline-read-enhancer (annotates Read output) and hashline-edit-diff-enhancer (shows diffs). Disable either companion hook individually via disabled_hooks if needed.
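For example, to enable hash-anchored edits but keep Read output unannotated (hook names as listed above):

```jsonc
{
  "hashline_edit": true,
  // Keep the diff enhancer, drop the Read-output annotations
  "disabled_hooks": ["hashline-read-enhancer"],
}
```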

Experimental

json
{
  "experimental": {
    "truncate_all_tool_outputs": false,
    "aggressive_truncation": false,
    "auto_resume": false,
    "disable_omo_env": false,
    "task_system": true,
    "dynamic_context_pruning": {
      "enabled": false,
      "notification": "detailed",
      "turn_protection": { "enabled": true, "turns": 3 },
      "protected_tools": [
        "task",
        "todowrite",
        "todoread",
        "lsp_rename",
        "session_read",
        "session_write",
        "session_search"
      ],
      "strategies": {
        "deduplication": { "enabled": true },
        "supersede_writes": { "enabled": true, "aggressive": false },
        "purge_errors": { "enabled": true, "turns": 5 }
      }
    }
  }
}

| Option | Default | Description |
|---|---|---|
| truncate_all_tool_outputs | false | Truncate all tool outputs (not just whitelisted) |
| aggressive_truncation | false | Aggressively truncate when token limit exceeded |
| auto_resume | false | Auto-resume after thinking block recovery |
| disable_omo_env | false | Disable auto-injected <omo-env> block (date/time/locale). Improves cache hit rate. |
| task_system | false | Enable Sisyphus task system |
| dynamic_context_pruning.enabled | false | Auto-prune old tool outputs to manage context window |
| dynamic_context_pruning.notification | detailed | Pruning notifications: off / minimal / detailed |
| turn_protection.turns | 3 | Recent turns protected from pruning (1–10) |
| strategies.deduplication | true | Remove duplicate tool calls |
| strategies.supersede_writes | true | Prune write inputs when file later read |
| strategies.supersede_writes.aggressive | false | Prune any write if ANY subsequent read exists |
| strategies.purge_errors.turns | 5 | Turns before pruning errored tool inputs |

Reference

Environment Variables

| Variable | Description |
|---|---|
| OPENCODE_CONFIG_DIR | Override OpenCode config directory (useful for profile isolation) |
| OMO_SEND_ANONYMOUS_TELEMETRY | Set to 0, false, or no to disable anonymous telemetry |
| OMO_DISABLE_POSTHOG | Legacy telemetry opt-out flag. Set to 1 or true to disable PostHog |
| POSTHOG_API_KEY | Optional override for the built-in PostHog project API key |
| POSTHOG_HOST | Override the PostHog ingestion host. Defaults to https://us.i.posthog.com |

Provider-Specific

Google Auth

Install opencode-antigravity-auth for Google Gemini. Provides multi-account load balancing, dual quota, and variant-based thinking.

Ollama

Must disable streaming to avoid JSON parse errors:

json
{
  "agents": {
    "explore": { "model": "ollama/qwen3-coder", "stream": false }
  }
}

Common models: ollama/qwen3-coder, ollama/ministral-3:14b, ollama/lfm2.5-thinking

See Ollama Troubleshooting for "JSON Parse error: Unexpected EOF" issues.