src/setup/README.md
This document is the authoritative specification for IronClaw's onboarding
wizard. Any code change to src/setup/ must keep this document in sync.
If a future contributor or coding agent modifies setup behavior, update this
file first, then adjust the code to match.
ironclaw onboard [--skip-auth] [--channels-only] [--provider-only] [--quick]
Explicit invocation. Loads .env files, runs the wizard, exits.
ironclaw (first run, no database configured)
Auto-detection via check_onboard_needed() in main.rs. Skips onboarding
when ONBOARD_COMPLETED env var is set (written to ~/.ironclaw/.env by
the wizard). Otherwise it triggers when no database is configured, i.e. none of:
- DATABASE_URL env var is set
- LIBSQL_PATH env var is set
- ~/.ironclaw/ironclaw.db exists on disk

Auto-triggered onboarding uses quick mode by default.
The --no-onboard CLI flag suppresses auto-detection.
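The detection logic above can be sketched as follows. This is a minimal sketch: `onboard_needed` takes an env-lookup closure so it is testable, while the real check_onboard_needed() in main.rs reads the process environment directly.

```rust
use std::path::Path;

// Sketch of check_onboard_needed(); `env` is a lookup closure standing
// in for std::env::var, and the path layout mirrors the doc above.
fn onboard_needed(env: impl Fn(&str) -> Option<String>, home: &Path) -> bool {
    // ONBOARD_COMPLETED (written by the wizard) short-circuits everything.
    if env("ONBOARD_COMPLETED").is_some() {
        return false;
    }
    // Any sign of a configured database suppresses onboarding.
    let db_configured = env("DATABASE_URL").is_some()
        || env("LIBSQL_PATH").is_some()
        || home.join(".ironclaw/ironclaw.db").exists();
    !db_configured
}
```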
1. Parse CLI args
2. If Command::Onboard → load .env, run wizard, exit
3. If Command::Run or no command:
a. Load .env files (dotenvy::dotenv() then load_ironclaw_env())
b. check_onboard_needed() → run wizard if needed
c. Config::from_env() → build config from env vars
d. Create SessionManager → load session token
e. ensure_authenticated() → validate session (NEAR AI only)
f. ... rest of agent startup
Critical ordering: .env files must be loaded (step 3a) before
Config::from_env() (step 3c) because bootstrap vars like
DATABASE_BACKEND live in ~/.ironclaw/.env.
Quick mode (--quick flag, or auto-triggered on first run) provides a
near-instant onboarding experience by auto-defaulting everything except
the LLM provider and model selection.
auto_setup_database() → libsql at ~/.ironclaw/ironclaw.db (zero prompts)
auto_setup_security() → keychain or env var (zero prompts)
Step 1/2: Inference Provider ← interactive
Step 2/2: Model Selection ← interactive (the only two prompts in quick mode)
↓
save_and_summarize() → includes tip to run `ironclaw onboard`
auto_setup_database(): Uses existing env vars if set (DATABASE_URL
for postgres, LIBSQL_PATH for libsql) without prompting. Otherwise
defaults to libsql at ~/.ironclaw/ironclaw.db, creates the database,
and runs migrations silently. Falls back to interactive prompts only when
the postgres feature alone is compiled and no DATABASE_URL is set.
auto_setup_security(): Checks for existing SECRETS_MASTER_KEY
env var or OS keychain key. If neither exists, generates a new key and
stores it in the keychain (macOS) or env var (Linux/other). Zero prompts
except unavoidable macOS keychain dialogs.
.env preservation (fix for #751): write_bootstrap_env() now uses
upsert_bootstrap_vars() instead of save_bootstrap_env(), preserving
user-added variables like HTTP_HOST across re-onboarding.
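The preservation behavior can be sketched as an upsert over the existing file. This assumes the semantics described above; `upsert_env` is a hypothetical stand-in for upsert_bootstrap_vars(), not its actual implementation.

```rust
use std::collections::{BTreeMap, BTreeSet};

// Known bootstrap keys are updated in place, user-added lines are
// preserved verbatim, and missing keys are appended at the end.
fn upsert_env(existing: &str, vars: &BTreeMap<&str, &str>) -> String {
    let mut seen = BTreeSet::new();
    let mut out: Vec<String> = existing
        .lines()
        .map(|line| {
            if let Some((key, _)) = line.split_once('=') {
                if let Some(val) = vars.get(key.trim()) {
                    seen.insert(key.trim().to_string());
                    return format!("{}=\"{}\"", key.trim(), val);
                }
            }
            line.to_string() // user-added line (e.g. HTTP_HOST): keep as-is
        })
        .collect();
    for (key, val) in vars {
        if !seen.contains(*key) {
            out.push(format!("{}=\"{}\"", key, val));
        }
    }
    out.join("\n")
}
```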
The full 9-step wizard remains available via ironclaw onboard.
Step 1: Database Connection
Step 2: Security (master key)
Step 3: Inference Provider ← skipped if --skip-auth
Step 4: Model Selection
Step 5: Embeddings
Step 6: Channel Configuration
Step 7: Extensions (tools)
Step 8: Docker Sandbox
Step 9: Background Tasks (heartbeat)
↓
save_and_summarize()
--channels-only mode runs only Step 6, skipping everything else.
Module: wizard.rs → step_database()
Goal: Select backend, establish connection, run migrations.
Init delegation: Backend-specific connection logic lives in src/db/mod.rs
(connect_without_migrations()), not in the wizard. The wizard calls
test_database_connection() which delegates to the db module factory. Feature-flag
branching (#[cfg(feature = ...)]) is confined to src/db/mod.rs. PostgreSQL
validation (version >= 15, pgvector) is handled by validate_postgres() in
src/db/mod.rs.
Decision tree:
Both features compiled?
├─ Yes → DATABASE_BACKEND env var set?
│ ├─ Yes → use that backend
│ └─ No → interactive selection (PostgreSQL vs libSQL)
├─ Only postgres feature → prompt for DATABASE_URL, test connection
└─ Only libsql feature → prompt for path, test connection
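A minimal sketch of the both-features branch above. Names and the prompt stub are illustrative; the real feature-flag gating lives behind #[cfg(...)] in src/db/mod.rs.

```rust
// Backend selection when both features are compiled: DATABASE_BACKEND
// wins if set, otherwise fall through to an interactive prompt (stubbed
// here as a closure).
#[derive(Debug, PartialEq)]
enum Backend {
    Postgres,
    Libsql,
}

fn choose_backend(env_value: Option<&str>, prompt: impl Fn() -> Backend) -> Backend {
    match env_value {
        Some("postgres") => Backend::Postgres,
        Some("libsql") => Backend::Libsql,
        _ => prompt(), // interactive selection (PostgreSQL vs libSQL)
    }
}
```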
PostgreSQL path:
- DATABASE_URL from env or settings
- connect_without_migrations() (validates version, pgvector)

libSQL path:
- Path from env or settings (default ~/.ironclaw/ironclaw.db)
- connect_without_migrations()

Invariant: After Step 1, self.db is Some(Arc<dyn Database>).
This is required for settings persistence in save_and_summarize().
Module: wizard.rs → step_security()
Goal: Configure encryption for API tokens and secrets.
Decision tree:
SECRETS_MASTER_KEY env var set?
├─ Yes → use env var, done
└─ No → try get_master_key() from OS keychain
├─ Ok(bytes) → cache in self.secrets_crypto, ask "use existing?"
│ ├─ Yes → done (keychain)
│ └─ No → clear cache, fall through to options
└─ Err → fall through to options
├─ OS Keychain: generate + store + build SecretsCrypto
├─ Env variable: generate + print export command
└─ Skip: disable secrets features
CRITICAL CAVEAT: macOS Keychain Dialogs
On macOS, security_framework::get_generic_password() can trigger TWO
system dialogs: one to unlock the keychain, and one to authorize access
to the item.
This is OS-level behavior we cannot prevent. To minimize pain:
Use get_master_key() not has_master_key() in step 2. Both call
the same underlying API, but get_master_key() returns the key bytes
so we can cache them. has_master_key() throws them away, forcing a
second keychain access later.
Build SecretsCrypto eagerly. When the keychain key is retrieved,
immediately construct SecretsCrypto and store in self.secrets_crypto.
Later calls to init_secrets_context() check this field first, avoiding
redundant keychain probes.
Never probe the keychain in read-only commands (e.g., ironclaw status).
The status command reports "env not set (keychain may be configured)"
rather than triggering system dialogs.
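The cache-first discipline can be sketched as follows. The probe counter stands in for the real keychain API, and the field names are simplified stand-ins for the wizard's state.

```rust
// Later lookups check the cached field first, so the keychain (and its
// dialogs) is hit at most once per wizard run.
struct Wizard {
    secrets_crypto: Option<Vec<u8>>, // stand-in for cached SecretsCrypto
    keychain_probes: u32,            // each probe would show macOS dialogs
}

impl Wizard {
    fn get_key(&mut self) -> &[u8] {
        if self.secrets_crypto.is_none() {
            self.keychain_probes += 1; // stand-in for get_master_key()
            self.secrets_crypto = Some(vec![0u8; 32]);
        }
        self.secrets_crypto.as_deref().unwrap()
    }
}
```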
Invariant: After Step 2, self.secrets_crypto is Some if the user
chose Keychain or generated a new key. It may be None if the user chose
env-var mode or skipped secrets.
Module: wizard.rs → step_inference_provider()
Goal: Choose LLM backend and authenticate.
Providers:
| Provider | Auth Method | Secret Name | Env Var |
|---|---|---|---|
| NEAR AI Chat | Browser OAuth or session token | - | NEARAI_SESSION_TOKEN |
| NEAR AI Cloud | API key | llm_nearai_api_key | NEARAI_API_KEY |
| Anthropic | API key | anthropic_api_key | ANTHROPIC_API_KEY |
| OpenAI | API key | openai_api_key | OPENAI_API_KEY |
| Ollama | None | - | - |
| OpenRouter | API key | llm_openrouter_api_key | OPENROUTER_API_KEY |
| OpenAI-compatible | Optional API key | llm_compatible_api_key | LLM_API_KEY |
| AWS Bedrock | AWS credentials (IAM, SSO, instance roles) | - | - |
OpenRouter is a standalone registry provider (providers.json id "openrouter")
with its own secret name and env var. It is not stored as openai_compatible.
OpenRouter (setup.kind = "api_key" in providers.json):
- Base URL: https://openrouter.ai/api/v1
- Handled by setup_api_key_provider() with display name "OpenRouter"
- API key required (api_key_required: true)
- Default model: openai/gpt-4o

API-key providers (setup_api_key_provider):
- Key entered via secret_input()
- Stored via init_secrets_context()
- Cached in self.llm_api_key for model fetching in Step 4
- Preserve selected_model on a same-backend re-run; clear it only when
  switching to a different backend

NEAR AI (setup_nearai):
- Calls session_manager.ensure_authenticated(), which shows the auth menu:
  - NEAR AI Chat (private.near.ai, session token auth)
  - NEAR AI Cloud (cloud-api.near.ai, API key auth)
- Session tokens are stored in ~/.ironclaw/session.json.
  Hosting providers can set the NEARAI_SESSION_TOKEN env var directly (takes
  precedence over file-based tokens).
- NEARAI_API_KEY is saved to ~/.ironclaw/.env (bootstrap) and the encrypted
  secrets store (llm_nearai_api_key). LlmConfig::resolve() auto-selects
  ChatCompletions mode when the API key is present.

self.llm_api_key caching: The wizard caches the API key as
Option<SecretString> so that Step 4 (model fetching) and Step 5
(embeddings) can use it without re-reading from the secrets store or
mutating environment variables.
Module: wizard.rs → step_model_selection()
Goal: Choose which model to use.
Flow:
- The chosen model is stored in self.settings.selected_model

Model fetchers pass the cached API key explicitly:

```rust
let cached = self.llm_api_key.as_ref().map(|k| k.expose_secret().to_string());
let models = fetch_anthropic_models(cached.as_deref()).await;
```

This avoids mutating environment variables. The fetcher checks the
explicit key first, then falls back to the standard env var.
Module: wizard.rs → step_embeddings()
Goal: Configure semantic search for workspace memory.
Flow:
- NEAR AI provider: offered when the backend is nearai OR a valid session exists
- OpenAI provider: offered when OPENAI_API_KEY is in env OR (backend is
  openai AND cached key)
- Default model: text-embedding-3-small (for both providers)
Module: wizard.rs → step_channels(), delegating to channels.rs
Goal: Enable input channels (TUI, HTTP, Telegram, etc.).
Sub-steps:
6a. Tunnel setup (if webhook channels needed)
6b. Discover WASM channels from ~/.ironclaw/channels/
6c. Build channel options: discovered + bundled + registry catalog
6d. Multi-select: CLI/TUI, HTTP, all available channels
6e. Install missing bundled channels (copy WASM binaries)
6f. Install missing registry channels (download artifacts, fallback to source build)
6g. Initialize SecretsContext (for token storage)
6h. Setup HTTP webhook (if selected)
6i. Setup each WASM channel (secrets, owner binding)
Channel sources (priority order for installation):
- Discovered: WASM binaries already present in ~/.ironclaw/channels/
- Bundled: built from the repo (channels-src/)
- Registry: catalog entries (registry/channels/*.json, download-first with
  source fallback)

Tunnel setup (setup_tunnel):
- Resulting public URL is stored in self.settings.tunnel.public_url

WASM channel setup (setup_wasm_channel):
- Reads capabilities.json for setup.required_secrets
- Stores each required secret via the SecretsContext

Telegram special case (setup_telegram):
- Validates the bot token against the getMe API
- Polls getUpdates for 120s to capture the sender's user ID

SecretsContext creation (init_secrets_context):
- self.secrets_crypto (set in Step 2) → use if available
- SECRETS_MASTER_KEY env var
- get_master_key() from keychain (only in channels_only mode)
- Store is built on self.db (Arc<dyn Database>)

Module: wizard.rs → step_extensions()
Goal: Install WASM tools from the extension registry.
Flow:
- Load the RegistryCatalog from the registry/ directory
- Check ~/.ironclaw/tools/ for tools marked "default" and already installed.
- Install via RegistryInstaller::install_with_source_fallback() (download-first,
  fallback to source build)
- Prompt for required tool secrets (e.g. google_oauth_token)

Registry lookup (load_registry_catalog):
Searches for registry/ directory in order:
- CARGO_MANIFEST_DIR (compile-time, dev builds)

Module: wizard.rs → step_heartbeat()
Goal: Configure periodic background execution.
Flow:
- Enable/disable and interval are stored in self.settings.heartbeat

Settings are persisted in two places:
Layer 1: ~/.ironclaw/.env (bootstrap vars)
Contains only the settings needed BEFORE database connection. Written by
save_bootstrap_env() in bootstrap.rs.
```sh
DATABASE_BACKEND="libsql"
LIBSQL_PATH="/Users/name/.ironclaw/ironclaw.db"
LLM_BACKEND="openai_compatible"
LLM_BASE_URL="http://my-vllm:8000/v1"
```
Or for PostgreSQL + NEAR AI:
```sh
DATABASE_BACKEND="postgres"
DATABASE_URL="postgres://user:pass@localhost/ironclaw"
LLM_BACKEND="nearai"
```
Or for Ollama:
```sh
LLM_BACKEND="ollama"
OLLAMA_BASE_URL="http://localhost:11434"
```
Why separate? Chicken-and-egg: you need DATABASE_BACKEND to know
which database to connect to, and LLM_BACKEND to know whether to
attempt NEAR AI session auth -- neither can be stored in the database.
Layer 2: Database settings table (everything else)
All other settings are stored as key-value pairs in the settings table,
keyed by (user_id, key). Written by set_all_settings().
Settings are serialized via Settings::to_db_map() as dotted paths:
```
database_backend = "libsql"
llm_backend = "nearai"
selected_model = "anthropic/claude-sonnet-4-5"
embeddings.enabled = "true"
embeddings.provider = "nearai"
channels.http_enabled = "true"
heartbeat.enabled = "true"
heartbeat.interval_secs = "300"
```
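The round-trip contract (to_db_map() followed by from_db_map() preserves values) can be sketched with a toy struct. The struct, its fields, and the helpers here are illustrative, not the real Settings implementation.

```rust
use std::collections::BTreeMap;

// Toy dotted-path serialization mirroring the key naming shown above.
#[derive(Debug, PartialEq)]
struct EmbeddingsToy {
    enabled: bool,
    provider: String,
}

fn to_db_map(e: &EmbeddingsToy) -> BTreeMap<String, String> {
    let mut m = BTreeMap::new();
    m.insert("embeddings.enabled".to_string(), e.enabled.to_string());
    m.insert("embeddings.provider".to_string(), e.provider.clone());
    m
}

fn from_db_map(m: &BTreeMap<String, String>) -> EmbeddingsToy {
    EmbeddingsToy {
        enabled: m.get("embeddings.enabled").map(|v| v == "true").unwrap_or(false),
        provider: m.get("embeddings.provider").cloned().unwrap_or_default(),
    }
}
```

This is the property the setup tests should exercise: serializing and deserializing must yield the original value.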
Settings are persisted after every successful step, not just at the end. This prevents data loss if a later step fails (e.g., the user enters an API key in step 3 but step 5 crashes — they won't need to re-enter it).
persist_after_step() is called after each step in run() and:
- Writes bootstrap vars to ~/.ironclaw/.env via write_bootstrap_env()
- Persists the full settings to the database via persist_settings()

try_load_existing_settings() is called after Step 1 establishes a
database connection. It loads any previously saved settings from the
database using get_all_settings("default") → Settings::from_db_map()
→ merge_from(). This recovers progress from prior partial wizard runs.
Ordering after Step 1 is critical:
1. step_database() → sets DB fields in self.settings
2. let step1 = self.settings.clone() → snapshot Step 1 choices
3. try_load_existing_settings() → merge DB values into self.settings
4. self.settings.merge_from(&step1) → re-apply Step 1 (fresh wins over stale)
5. persist_after_step() → save merged state

This ordering ensures that fresh Step 1 choices win over stale database
values, while progress saved by prior partial runs is still recovered.
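A toy model of that merge discipline. The "other wins where set" rule is an assumption about merge_from()'s semantics inferred from the ordering above; the struct and fields are illustrative.

```rust
// Two partial settings snapshots; merging applies the other side's
// Some(...) fields over our own.
#[derive(Clone, Debug, PartialEq)]
struct Partial {
    database_backend: Option<String>,
    selected_model: Option<String>,
}

impl Partial {
    fn merge_from(&mut self, other: &Partial) {
        if other.database_backend.is_some() {
            self.database_backend = other.database_backend.clone();
        }
        if other.selected_model.is_some() {
            self.selected_model = other.selected_model.clone();
        }
    }
}
```

Applying a stale DB snapshot first and then re-applying the fresh Step 1 snapshot recovers prior progress (e.g. a saved model) without letting stale values override the user's new database choice.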
Final step of the wizard:
1. Mark onboard_completed = true
2. Call persist_settings() for final write (idempotent — ensures
onboard_completed flag is saved)
3. Call write_bootstrap_env() for final .env write (idempotent)
4. Print configuration summary
Bootstrap vars written to ~/.ironclaw/.env:
- DATABASE_BACKEND (always)
- DATABASE_URL (if postgres)
- LIBSQL_PATH (if libsql)
- LIBSQL_URL (if turso sync)
- LLM_BACKEND (always, when set)
- LLM_BASE_URL (if openai_compatible)
- OLLAMA_BASE_URL (if ollama)
- NEARAI_API_KEY (if API key auth path)
- ONBOARD_COMPLETED (always, "true")

Invariant: Both Layer 1 and Layer 2 must be written. If the database
write fails, the wizard returns an error and the .env file is not written.
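The invariant implies a write ordering that can be sketched as follows; the function names are illustrative stand-ins, not the wizard's actual API.

```rust
// Layer 2 (database) is written first; the Layer 1 (.env) write only
// happens after the database write succeeds, so a DB failure leaves
// the .env file untouched.
fn save_all(
    db_write: impl Fn() -> Result<(), String>,
    env_write: impl Fn() -> Result<(), String>,
) -> Result<(), String> {
    db_write()?; // on error, return early: .env is not written
    env_write()
}
```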
bootstrap.rs handles one-time upgrades from older config formats:
- bootstrap.json → extracts DATABASE_URL, writes .env, renames to .migrated
- settings.json → migrated to database via migrate_disk_to_db()

Module: settings.rs
```rust
pub struct Settings {
    // Meta
    pub onboard_completed: bool,

    // Step 1: Database
    pub database_backend: Option<String>, // "postgres" | "libsql"
    pub database_url: Option<String>,
    pub libsql_path: Option<String>,
    pub libsql_url: Option<String>,

    // Step 2: Security
    pub secrets_master_key_source: KeySource, // Keychain | Env | None

    // Step 3: Inference
    pub llm_backend: Option<String>, // "nearai" | "anthropic" | "openai" | "ollama" | "openai_compatible" | "bedrock"
    pub ollama_base_url: Option<String>,
    pub openai_compatible_base_url: Option<String>,

    // Step 4: Model
    pub selected_model: Option<String>,

    // Step 5: Embeddings
    pub embeddings: EmbeddingsSettings, // enabled, provider, model

    // Step 6: Channels
    pub tunnel: TunnelSettings,     // provider, public_url
    pub channels: ChannelSettings,  // http config, telegram owner, etc.

    // Step 7: Heartbeat
    pub heartbeat: HeartbeatSettings, // enabled, interval, notify

    // Advanced (not in wizard, set via `ironclaw config set`)
    pub agent: AgentSettings,
    pub wasm: WasmSettings,
    pub sandbox: SandboxSettings,
    pub safety: SafetySettings,
    pub builder: BuilderSettings,
}
```
KeySource enum: Keychain | Env | None
Thin wrapper for setup-time secret operations:
```rust
pub struct SecretsContext {
    store: Arc<dyn SecretsStore>,
    user_id: String,
}
```
Created by init_secrets_context() which:
- Builds SecretsCrypto from self.secrets_crypto or loads it from keychain/env
- Picks the store backend from self.settings.database_backend
- Returns a SecretsContext wrapping the store

Secrets are encrypted with AES-256-GCM using the master key, then stored
in the database secrets table. The wizard writes secrets like:
telegram_bot_token → encrypted bot token
telegram_webhook_secret → encrypted webhook HMAC secret
anthropic_api_key → encrypted API key
Module: prompts.rs
| Function | Description |
|---|---|
| `select_one(label, options)` | Numbered single-choice menu |
| `select_many(label, options, defaults)` | Checkbox multi-select (raw terminal mode) |
| `input(label)` | Single-line text input |
| `optional_input(label, hint)` | Text input that can be empty |
| `secret_input(label)` | Hidden input (shows * per char), returns SecretString |
| `confirm(label, default)` | [Y/n] or [y/N] prompt |
| `print_header(text)` | Bold section header with underline |
| `print_step(n, total, text)` | `[1/7] Step Name` |
| `print_success(text)` | Green ✓ prefix (ANSI color), message in default color |
| `print_error(text)` | Red ✗ prefix (ANSI color), message in default color |
| `print_info(text)` | Blue ℹ prefix (ANSI color), message in default color |
select_many uses crossterm raw mode for arrow key navigation.
Must properly restore terminal state on all exit paths.
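One way to guarantee restoration on all exit paths (including early returns and panics during unwinding) is an RAII guard. This sketch stubs crossterm's enable_raw_mode()/disable_raw_mode() with an atomic flag so it is self-contained; the real code calls the crossterm functions.

```rust
use std::sync::atomic::{AtomicBool, Ordering};

// Stand-in for the terminal's raw-mode state.
static RAW: AtomicBool = AtomicBool::new(false);

struct RawModeGuard;

impl RawModeGuard {
    fn enter() -> Self {
        RAW.store(true, Ordering::SeqCst); // stand-in for enable_raw_mode()
        RawModeGuard
    }
}

impl Drop for RawModeGuard {
    fn drop(&mut self) {
        RAW.store(false, Ordering::SeqCst); // stand-in for disable_raw_mode()
    }
}

fn select_many_sketch(fail: bool) -> Result<(), &'static str> {
    let _guard = RawModeGuard::enter();
    if fail {
        return Err("interrupted"); // guard still restores on early return
    }
    Ok(())
}
```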
Keychain behavior summary:
- get_generic_password() triggers system dialogs (unlock + authorize)
- Never probe the keychain in read-only commands (status, --help)
- Keychain entry: service "ironclaw", account: "master_key"
- Linux uses the secret-service crate and requires a gnome-keyring daemon
  running

On remote/VPS servers, the browser-based OAuth flow for NEAR AI may not
work because http://127.0.0.1:9876 is unreachable from the user's
local browser.
Solutions:
NEAR AI Cloud API key (option 4 in auth menu): Get an API key
from https://cloud.near.ai and paste it into the terminal. No
local listener is needed. The key is saved to ~/.ironclaw/.env
and the encrypted secrets store. Uses the OpenAI-compatible
ChatCompletions API mode.
Custom callback URL: Set IRONCLAW_OAUTH_CALLBACK_URL to a
publicly accessible URL (e.g., via SSH tunnel or reverse proxy) that
forwards to port 9876 on the server:
```sh
export IRONCLAW_OAUTH_CALLBACK_URL=https://myserver.example.com:9876
```
The callback_url() function in oauth_defaults.rs checks this env var
and falls back to http://127.0.0.1:{OAUTH_CALLBACK_PORT}.
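A sketch of that resolution order. The port constant is an assumption, and the override is passed as a parameter here for testability; the real callback_url() reads IRONCLAW_OAUTH_CALLBACK_URL via std::env::var.

```rust
// Assumed default OAuth callback port, matching the doc above.
const OAUTH_CALLBACK_PORT: u16 = 9876;

// Explicit override wins; otherwise fall back to the local listener.
fn callback_url(override_url: Option<&str>) -> String {
    match override_url {
        Some(url) => url.to_string(),
        None => format!("http://127.0.0.1:{}", OAUTH_CALLBACK_PORT),
    }
}
```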
Gotchas:
- # is common in URL-encoded passwords (%23 decoded); .env values must be
  double-quoted to preserve #
- Passwords are masked when connection strings are displayed:
  postgres://user:****@host/db
- Telegram bot tokens look like 123456:ABC-DEF...
- Telegram API calls go to https://api.telegram.org/bot{TOKEN}/method
- Webhooks are authenticated via the X-Telegram-Bot-Api-Secret-Token header
- Polling uses getUpdates (must delete webhook first)

Tests live in mod tests {} at the bottom of each file.
What to test when modifying setup:
- Settings round-trip: to_db_map() then from_db_map() preserves values
- .env: dotenvy can parse what save_bootstrap_env() writes

Run setup tests:
```sh
cargo test --lib -- setup
cargo test --lib -- bootstrap
```
When changing the onboarding flow:
- If adding or removing a step: update run() and adjust total_steps
- Keep the Settings struct and its to_db_map / from_db_map serialization in sync
- Update save_bootstrap_env() if bootstrap vars change
- Never call get_master_key() twice
- Verify init_secrets_context() respects the selected database backend
- Run cargo fmt
```sh
cargo clippy --all --benches --tests --examples --all-features -- -D warnings
cargo test --lib -- setup bootstrap
rm -rf ~/.ironclaw && cargo run
```