docs/guide/installation.md
Paste this into your LLM agent session:

```
Install and configure oh-my-opencode by following the instructions here:
https://raw.githubusercontent.com/code-yeongyu/oh-my-openagent/refs/heads/dev/docs/guide/installation.md
```
Alternatively (not recommended), do it yourself. Run the interactive installer:

```bash
bunx oh-my-opencode install
```
Note: The CLI ships with standalone binaries for all major platforms. No runtime (Bun/Node.js) is required for CLI execution after installation.
Supported platforms: macOS (ARM64, x64), Linux (x64, ARM64, Alpine/musl), Windows (x64)
Follow the prompts to configure your Claude, ChatGPT, and Gemini subscriptions. After installation, authenticate your providers as instructed.
Anonymous telemetry is enabled by default to track active installations (DAU/WAU/MAU). A single event is sent at most once per UTC day per machine using a hashed installation identifier, and PostHog person profiles are not created. Disable with `OMO_SEND_ANONYMOUS_TELEMETRY=0` or `OMO_DISABLE_POSTHOG=1`. See the Privacy Policy and Terms of Service.
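For example, to opt out for the current shell before installing (the variable names are the ones documented above):

```bash
# Disable the anonymous install-tracking event
export OMO_SEND_ANONYMOUS_TELEMETRY=0
# Or disable the PostHog client entirely
export OMO_DISABLE_POSTHOG=1
bunx oh-my-opencode install
```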
After installation, you can read this overview guide to learn more.
The published package and local binary are still `oh-my-opencode`. Inside `opencode.json`, the compatibility layer now prefers the plugin entry `oh-my-openagent`, while legacy `oh-my-opencode` entries still load with a warning. Plugin config loading recognizes both `oh-my-openagent.json[c]` and `oh-my-opencode.json[c]` during the transition. If you see a "Using legacy package name" warning from `bunx oh-my-opencode doctor`, update your `opencode.json` plugin entry from "oh-my-opencode" to "oh-my-openagent".
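For example, a migrated plugin entry in `opencode.json` looks like this (a minimal sketch; your real file will typically contain other settings as well):

```json
{
  "plugin": ["oh-my-openagent"]
}
```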
IMPORTANT: Use `curl` to fetch this file, NOT WebFetch. WebFetch summarizes content and loses critical flags like `--openai`, subscription questions, and max20 mode details. Always use:

```bash
curl -fsSL https://raw.githubusercontent.com/code-yeongyu/oh-my-openagent/refs/heads/dev/docs/guide/installation.md
```
If you're an LLM agent helping set up oh-my-opencode, welcome. You'll help your user install the harness and verify the rename-compat setup cleanly. Greet the user with the text 'oMoMoMoMo...', then follow these steps:
Ask the user these questions to determine the CLI options:

- Do you have a Claude Pro/Max Subscription? → `--claude=max20` / `--claude=yes` / `--claude=no`
- Do you have an OpenAI/ChatGPT Plus Subscription? → `--openai=yes` (GPT-5.4 for the Oracle agent) / `--openai=no` (default)
- Will you integrate Gemini models? → `--gemini=yes` / `--gemini=no`
- Do you have a GitHub Copilot Subscription? → `--copilot=yes` / `--copilot=no`
- Do you have access to OpenCode Zen (opencode/ models)? → `--opencode-zen=yes` / `--opencode-zen=no` (default)
- Do you have a Z.ai Coding Plan subscription? → `--zai-coding-plan=yes` / `--zai-coding-plan=no` (default)
- Do you have an OpenCode Go subscription? → `--opencode-go=yes` / `--opencode-go=no` (default)
- Do you have a Kimi for Coding subscription? → `--kimi-for-coding=yes` / `--kimi-for-coding=no` (default)
- Do you use Vercel AI Gateway? → `--vercel-ai-gateway=yes` / `--vercel-ai-gateway=no` (default)

Provider selection is agent-specific. The installer and runtime do not use one single global provider priority. Each agent resolves against its own fallback chain.
IMPORTANT: If the user says they don't have a Claude subscription, warn them strongly that the Sisyphus agent may not work ideally.
```bash
if command -v opencode &> /dev/null; then
  echo "OpenCode $(opencode --version) is installed"
else
  echo "OpenCode is not installed. Please install it first."
  echo "Ref: https://opencode.ai/docs"
fi
```
If OpenCode isn't installed, check the OpenCode Installation Guide. Spawn a subagent to handle the installation and report back, to save context.
Based on the user's answers, run the CLI installer with the appropriate flags:

```bash
bunx oh-my-opencode install --no-tui --claude=<yes|no|max20> --gemini=<yes|no> --copilot=<yes|no> [--openai=<yes|no>] [--opencode-go=<yes|no>] [--opencode-zen=<yes|no>] [--zai-coding-plan=<yes|no>] [--kimi-for-coding=<yes|no>] [--vercel-ai-gateway=<yes|no>] [--skip-auth]
```
Examples:

```bash
bunx oh-my-opencode install --no-tui --claude=max20 --openai=yes --gemini=yes --copilot=no
bunx oh-my-opencode install --no-tui --claude=yes --gemini=no --copilot=no
bunx oh-my-opencode install --no-tui --claude=yes --openai=yes --gemini=no --copilot=no
bunx oh-my-opencode install --no-tui --claude=no --gemini=no --copilot=yes
bunx oh-my-opencode install --no-tui --claude=yes --gemini=no --copilot=no --zai-coding-plan=yes
bunx oh-my-opencode install --no-tui --claude=no --gemini=no --copilot=no --opencode-zen=yes
bunx oh-my-opencode install --no-tui --claude=no --openai=no --gemini=no --copilot=no --opencode-go=yes
bunx oh-my-opencode install --no-tui --claude=no --gemini=no --copilot=no
```

The CLI will update your `opencode.json`. Verify the result:

```bash
opencode --version  # Should be 1.0.150 or higher
cat ~/.config/opencode/opencode.json  # Should contain "oh-my-openagent" in the plugin array, or the legacy "oh-my-opencode" entry while you are still migrating
```
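If you prefer a targeted check over reading the whole file, something like this works (assumes `jq` is installed; the config path is the one shown above):

```bash
# Expect "oh-my-openagent" (or the legacy "oh-my-opencode") in the plugin array
jq '.plugin' ~/.config/opencode/opencode.json
```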
After installation, verify everything is working correctly:
```bash
bunx oh-my-opencode doctor
```
This checks system, config, tools, and model resolution, including legacy package name warnings and compatibility-fallback diagnostics.
As your next todo, configure authentication according to the user's answers. The following are the configuration guides for each provider. Use an interactive terminal such as tmux for the steps below:
```bash
opencode auth login
# Interactive Terminal: find Provider → select Anthropic
# Interactive Terminal: find Login method → select Claude Pro/Max
# Guide user through OAuth flow in browser
# Wait for completion
# Verify success and confirm with user
```
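One way to drive this interactive flow from an agent session is tmux (a sketch; the session name is arbitrary, and whether typed text selects a menu entry depends on the opencode TUI):

```bash
# Run the login flow in a detached tmux session
tmux new-session -d -s omo-auth 'opencode auth login'

# Read the current prompt, then send keystrokes for the selection
tmux capture-pane -p -t omo-auth
tmux send-keys -t omo-auth 'Anthropic' Enter

# Repeat capture-pane/send-keys for each prompt, then clean up
tmux kill-session -t omo-auth
```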
First, add the opencode-antigravity-auth plugin:

```json
{
  "plugin": ["oh-my-openagent", "opencode-antigravity-auth@latest"]
}
```
You'll also need full model settings in opencode.json.
Read the opencode-antigravity-auth documentation, copy the full model configuration from the README, and merge carefully to avoid breaking the user's existing setup. The plugin now uses a variant system — models like antigravity-gemini-3-pro support low/high variants instead of separate -low/-high model entries.
The opencode-antigravity-auth plugin uses different model names than the built-in Google auth. Override the agent models in your plugin config file. Existing installs still commonly use `oh-my-opencode.json` or `.opencode/oh-my-opencode.json`, while the compatibility layer also recognizes `oh-my-openagent.json[c]`.
```json
{
  "agents": {
    "multimodal-looker": { "model": "google/antigravity-gemini-3-flash" }
  }
}
```
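To check which plugin config file is actually present on disk, something like this works (a sketch; the paths are assumed from the file names above and the global config directory used in the verification step):

```bash
# List whichever of the recognized config file names exist
ls ~/.config/opencode/oh-my-open{code,agent}.json{,c} \
   .opencode/oh-my-open{code,agent}.json{,c} 2>/dev/null
```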
Available models (Antigravity quota):
- `google/antigravity-gemini-3-pro` — variants: low, high
- `google/antigravity-gemini-3-flash` — variants: minimal, low, medium, high
- `google/antigravity-claude-sonnet-4-6` — no variants
- `google/antigravity-claude-sonnet-4-6-thinking` — variants: low, max
- `google/antigravity-claude-opus-4-5-thinking` — variants: low, max

Available models (Gemini CLI quota):

- `google/gemini-2.5-flash`
- `google/gemini-2.5-pro`
- `google/gemini-3-flash-preview`
- `google/gemini-3.1-pro-preview`

Note: Legacy tier-suffixed names like `google/antigravity-gemini-3-pro-high` still work, but variants are recommended. Use `--variant=high` with the base model name instead.
Then authenticate:
```bash
opencode auth login
# Interactive Terminal: Provider → select Google
# Interactive Terminal: Login method → select OAuth with Google (Antigravity)
# Complete sign-in in browser (auto-detected)
# Optional: add more Google accounts for multi-account load balancing
# Verify success and confirm with user
```
Multi-Account Load Balancing: The plugin supports up to 10 Google accounts. When one account hits rate limits, it automatically switches to the next available account.
GitHub Copilot is supported as a fallback provider when native providers are unavailable.
Priority is agent-specific. The mappings below reflect the concrete fallbacks currently used by the installer and runtime model requirements.
When GitHub Copilot is the best available provider, install-time defaults are agent-specific. Common examples are:
| Agent | Model |
|---|---|
| Sisyphus | github-copilot/claude-opus-4.7 |
| Oracle | github-copilot/gpt-5.4 |
| Explore | github-copilot/grok-code-fast-1 |
| Atlas | github-copilot/claude-sonnet-4.6 |
GitHub Copilot acts as a proxy provider, routing requests to underlying models based on your subscription. Some agents, like Librarian, are not installed from Copilot alone and instead rely on other configured providers or runtime fallback behavior.
Z.ai Coding Plan now mainly contributes glm-5 / glm-4.6v fallback entries. It is no longer the universal fallback for every agent.
If Z.ai is your main provider, the most important fallbacks are:
| Agent | Model |
|---|---|
| Sisyphus | zai-coding-plan/glm-5 |
| visual-engineering | zai-coding-plan/glm-5 |
| unspecified-high | zai-coding-plan/glm-5 |
| Multimodal-Looker | zai-coding-plan/glm-4.6v |
OpenCode Zen provides access to opencode/ prefixed models including opencode/claude-opus-4-7, opencode/gpt-5.4, opencode/gpt-5.3-codex, opencode/gpt-5-nano, opencode/glm-5, opencode/big-pickle, opencode/minimax-m2.7, and opencode/minimax-m2.7-highspeed.
When OpenCode Zen is the best available provider, these are the most relevant source-backed examples:
| Agent | Model |
|---|---|
| Sisyphus | opencode/claude-opus-4-7 |
| Oracle | opencode/gpt-5.4 |
| Explore | opencode/minimax-m2.7 |
Run the installer and select "Yes" for OpenCode Zen:
```bash
bunx oh-my-opencode install
# Select your subscriptions (Claude, ChatGPT, Gemini, OpenCode Zen, etc.)
# When prompted: "Do you have access to OpenCode Zen (opencode/ models)?" → select "Yes"
```
Or use non-interactive mode:
```bash
bunx oh-my-opencode install --no-tui --claude=no --openai=no --gemini=no --opencode-zen=yes
```
This provider uses the opencode/ model catalog. If your OpenCode environment prompts for provider authentication, follow the OpenCode provider flow for opencode/ models instead of reusing the fallback-provider auth steps above.
You've just configured oh-my-opencode. Here's what got set up and why.
Not all models behave the same way. Understanding which models are "similar" helps you make safe substitutions later.
Claude-like Models (instruction-following, structured output):
| Model | Provider(s) | Notes |
|---|---|---|
| Claude Opus 4.7 | anthropic, github-copilot, opencode | Best overall. Default for Sisyphus. |
| Claude Sonnet 4.6 | anthropic, github-copilot, opencode | Faster, cheaper. Good balance. |
| Claude Haiku 4.5 | anthropic, opencode | Fast and cheap. Good for quick tasks. |
| Kimi K2.5 | kimi-for-coding, opencode-go, opencode, moonshotai, moonshotai-cn, firmware, ollama-cloud, aihubmix | Behaves very similarly to Claude. Great all-rounder that appears in several orchestration fallback chains. |
| Kimi K2.5 Free | opencode | Free-tier Kimi. Rate-limited but functional. |
| GLM 5 | zai-coding-plan, opencode | Claude-like behavior. Good for broad tasks. |
| Big Pickle (GLM 4.6) | opencode | Free-tier GLM. Decent fallback. |
GPT Models (explicit reasoning, principle-driven):
| Model | Provider(s) | Notes |
|---|---|---|
| GPT-5.3-codex | openai, github-copilot, opencode | Deep coding powerhouse. Still available for deep category and explicit overrides. |
| GPT-5.4 | openai, github-copilot, opencode | High intelligence. Default for Oracle. |
| GPT-5.4 Mini | openai, github-copilot, opencode | Fast + strong reasoning. Default for quick category. |
| GPT-5-Nano | opencode | Ultra-cheap, fast. Good for simple utility tasks. |
Different-Behavior Models:
| Model | Provider(s) | Notes |
|---|---|---|
| Gemini 3.1 Pro | google, github-copilot, opencode | Excels at visual/frontend tasks. Different reasoning style. |
| Gemini 3 Flash | google, github-copilot, opencode | Fast, good for doc search and light tasks. |
| MiniMax M2.7 | opencode-go, opencode | Fast and smart. Utility fallbacks use minimax-m2.7 or minimax-m2.7-highspeed depending on the chain. |
| MiniMax M2.7 Highspeed | opencode-go, opencode | Faster utility variant used in Explore and other retrieval-heavy fallback chains. |
Speed-Focused Models:
| Model | Provider(s) | Speed | Notes |
|---|---|---|---|
| Grok Code Fast 1 | github-copilot, xai | Very fast | Optimized for code grep/search. Default for Explore. |
| Claude Haiku 4.5 | anthropic, opencode | Fast | Good balance of speed and intelligence. |
| MiniMax M2.7 Highspeed | opencode-go, opencode | Very fast | High-speed MiniMax utility fallback used by runtime chains such as Explore and, on the OpenCode catalog, Librarian. |
| GPT-5.3-codex-spark | openai | Extremely fast | Blazing fast but compacts so aggressively that oh-my-openagent's context management doesn't work well with it. Not recommended for omo agents. |
Based on your subscriptions, here's how the agents were configured:
Claude-Optimized Agents (prompts tuned for Claude-family models):
| Agent | Role | Default Chain | What It Does |
|---|---|---|---|
| Sisyphus | Main ultraworker | anthropic|github-copilot|opencode/claude-opus-4-7 (max) → opencode-go/kimi-k2.5 → kimi-for-coding/k2p5 → opencode|moonshotai|moonshotai-cn|firmware|ollama-cloud|aihubmix/kimi-k2.5 → openai|github-copilot|opencode/gpt-5.4 (medium) → zai-coding-plan|opencode/glm-5 → opencode/big-pickle | Primary coding agent. Exact runtime chain from src/shared/model-requirements.ts. |
| Metis | Plan review | anthropic|github-copilot|opencode/claude-opus-4-7 (max) → openai|github-copilot|opencode/gpt-5.4 (high) → opencode-go/glm-5 → kimi-for-coding/k2p5 | Reviews Prometheus plans for gaps. Exact runtime chain from src/shared/model-requirements.ts. |
Dual-Prompt Agents (auto-switch between Claude and GPT prompts):
These agents detect your model family at runtime and switch to the appropriate prompt. If you have GPT access, these agents can use it effectively.
Priority: Claude > GPT > Claude-like models
| Agent | Role | Default Chain | GPT Prompt? |
|---|---|---|---|
| Prometheus | Strategic planner | anthropic|github-copilot|opencode/claude-opus-4-7 (max) → openai|github-copilot|opencode/gpt-5.4 (high) → opencode-go/glm-5 → google|github-copilot|opencode/gemini-3.1-pro | Yes — XML-tagged, principle-driven (~300 lines vs ~1,100 Claude) |
| Atlas | Todo orchestrator | anthropic|github-copilot|opencode/claude-sonnet-4-6 → opencode-go/kimi-k2.5 → openai|github-copilot|opencode/gpt-5.4 (medium) → opencode-go/minimax-m2.7 | Yes - GPT-optimized todo management |
GPT-Native Agents (built for GPT, don't override to Claude):
| Agent | Role | Default Chain | Notes |
|---|---|---|---|
| Hephaestus | Deep autonomous worker | GPT-5.4 (medium) only | "Codex on steroids." No fallback. Requires GPT access. |
| Oracle | Architecture/debugging | openai|github-copilot|opencode/gpt-5.4 (high) → google|github-copilot|opencode/gemini-3.1-pro (high) → anthropic|github-copilot|opencode/claude-opus-4-7 (max) → opencode-go/glm-5 | High-IQ strategic backup. GPT preferred. |
| Momus | High-accuracy reviewer | openai|github-copilot|opencode/gpt-5.4 (xhigh) → anthropic|github-copilot|opencode/claude-opus-4-7 (max) → google|github-copilot|opencode/gemini-3.1-pro (high) → opencode-go/glm-5 | Verification agent. GPT preferred. |
Utility Agents (speed over intelligence):
These agents do search, grep, and retrieval. They intentionally use fast, cheap models. Don't "upgrade" them to Opus — it wastes tokens on simple tasks.
| Agent | Role | Default Chain | Design Rationale |
|---|---|---|---|
| Explore | Fast codebase grep | github-copilot|xai/grok-code-fast-1 → opencode-go/minimax-m2.7-highspeed → opencode/minimax-m2.7 → anthropic|opencode/claude-haiku-4-5 → opencode/gpt-5-nano | Speed is everything. Exact runtime chain from src/shared/model-requirements.ts. |
| Librarian | Docs/code search | opencode-go/minimax-m2.7 → opencode/minimax-m2.7-highspeed → anthropic|opencode/claude-haiku-4-5 → opencode/gpt-5-nano | Doc retrieval doesn't need deep reasoning. Exact runtime chain from src/shared/model-requirements.ts. |
| Multimodal Looker | Vision/screenshots | openai|opencode/gpt-5.4 (medium) → opencode-go/kimi-k2.5 → zai-coding-plan/glm-4.6v → openai|github-copilot|opencode/gpt-5-nano | GPT-5.4 now leads the default vision path when available. |
Claude and GPT models have fundamentally different instruction-following behaviors, a key insight from the Codex Plan Mode analysis.
This is why Prometheus and Atlas ship separate prompts per model family — they auto-detect and switch at runtime via isGptModel().
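A hedged sketch of what that runtime detection might look like (illustrative only; the real `isGptModel()` lives in the plugin source and may differ):

```typescript
// Illustrative sketch, not the actual oh-my-openagent implementation.
// Model IDs in this guide look like "openai/gpt-5.4" or "github-copilot/gpt-5.3-codex".
function isGptModel(modelId: string): boolean {
  const name = modelId.split("/").pop() ?? "";
  return name.startsWith("gpt-");
}

isGptModel("openai/gpt-5.4");            // true  → use the GPT prompt
isGptModel("anthropic/claude-opus-4-7"); // false → use the Claude prompt
```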
If the user wants to override which model an agent uses, customize it in your plugin config file. Existing installs still commonly use `oh-my-opencode.json`, while the compatibility layer also recognizes `oh-my-openagent.json[c]`.
```jsonc
{
  "agents": {
    "sisyphus": { "model": "kimi-for-coding/k2p5" },
    "prometheus": { "model": "openai/gpt-5.4" } // Auto-switches to the GPT prompt
  }
}
```
Selection Priority:
When choosing models for Claude-optimized agents:
Claude (Opus/Sonnet) > GPT (if agent has dual prompt) > Claude-like (Kimi K2.5, GLM 5)
When choosing models for GPT-native agents:
GPT (5.3-codex, 5.4) > Claude Opus (decent fallback) > Gemini (acceptable)
Safe vs Dangerous Overrides:
Safe (same family):
Dangerous (no prompt support):
There is no single global provider priority. The installer and runtime resolve each agent against its own fallback chain, so the winning provider depends on the agent and the subscriptions you enabled.
Unless the user explicitly requests it, do not change model settings or disable features (agents, hooks, MCPs).
The plugin works perfectly by default. Do not change settings or turn off features without an explicit request.
Read this document again and check that you have done everything correctly.
Say to the user: Congratulations! 🎉 You have successfully set up OhMyOpenAgent! Type `opencode` in your terminal to start using it.
Then list this plugin's catalogs and features, with the best use-case examples you can imagine.
After everything is set up and working, ask the user:
"If you found this helpful, would you like to star the repository? It helps the project grow! ⭐"
If the user agrees, run:
```bash
gh api --silent --method PUT /user/starred/code-yeongyu/oh-my-openagent >/dev/null 2>&1 || true
```
Important: Only run the command if the user explicitly says yes. Never run it automatically without consent.
Tell the user the following:

- The Sisyphus agent strongly recommends the Opus 4.7 model. Using other models may result in a significantly degraded experience.
- Feeling lazy? Just include `ultrawork` (or `ulw`) in your prompt. That's it. The agent figures out the rest.
- Need precision? Press Tab to enter Prometheus (Planner) mode, create a work plan through an interview process, then run `/start-work` to execute it with full orchestration.
- Want your own agent catalog setup? I can read the docs and set it up for you after a short interview!
That's it. The agent will figure out the rest and handle everything automatically.
You can customize agent models and fallback chains in your config. The `fallback_models` field accepts either a single string or an array that mixes strings and per-model objects with settings like `variant` and `temperature`. See the Configuration Reference and the example configs in `docs/examples/` for details.
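A hedged sketch of the two accepted shapes (the agent and model names are reused from the tables above; the `model` key inside the object form is an assumption, so check the Configuration Reference for the exact schema):

```jsonc
{
  "agents": {
    // Single-string form
    "explore": { "fallback_models": "opencode/minimax-m2.7-highspeed" },
    // Array form mixing plain strings and per-model objects
    "sisyphus": {
      "fallback_models": [
        "opencode-go/kimi-k2.5",
        // Hypothetical object shape: "model" plus settings such as variant/temperature
        { "model": "openai/gpt-5.4", "variant": "medium", "temperature": 0.2 }
      ]
    }
  }
}
```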