# Troubleshooting

Common issues, diagnostics, and answers to frequently asked questions about OpenFang.
## Diagnostics

Run the built-in diagnostic tool:

```bash
openfang doctor
```

This checks your installation. You can also query the daemon directly:

```bash
openfang status
curl http://127.0.0.1:4200/api/health
curl http://127.0.0.1:4200/api/health/detail   # Requires auth
```
## Logging

OpenFang uses `tracing` for structured logging. Set the log level via the environment:

```bash
RUST_LOG=info openfang start             # Default
RUST_LOG=debug openfang start            # Verbose
RUST_LOG=openfang=debug openfang start   # Only OpenFang at debug, deps at info
```
### `cargo install` fails with compilation errors

Cause: Rust toolchain too old or missing system dependencies.

Fix:

```bash
rustup update stable
rustup default stable
rustc --version   # Need 1.75+
```
On Linux, you may also need:

```bash
# Debian/Ubuntu
sudo apt install pkg-config libssl-dev libsqlite3-dev

# Fedora
sudo dnf install openssl-devel sqlite-devel
```
### `openfang` command not found after install

Fix: Ensure `~/.cargo/bin` is in your `PATH`:

```bash
export PATH="$HOME/.cargo/bin:$PATH"
# Add to ~/.bashrc or ~/.zshrc to persist
```
### Black screen after login (fish shell)

Cause: Older OpenFang installers (<v0.6.4) appended a PATH line directly to `~/.config/fish/config.fish`. On Arch derivatives like CachyOS, the desktop session can source fish on login, so a malformed or invalid PATH line prevents the session from finishing and leaves you on a black screen.

Fix: Boot to a TTY (Ctrl+Alt+F2) and remove any OpenFang PATH lines from `config.fish`:

```bash
sed -i '/openfang/d' ~/.config/fish/config.fish
```

Then re-run the installer. Current versions instead write to `~/.config/fish/conf.d/openfang.fish` (a drop-in directory) and guard the path with `test -d`, so a missing install directory can never wedge fish startup.
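A guarded drop-in of that shape might look like the following (illustrative sketch; the exact file contents shipped by the installer may differ):

```fish
# ~/.config/fish/conf.d/openfang.fish
# Only touch PATH if the install directory actually exists,
# so a missing directory can never break fish startup.
if test -d "$HOME/.cargo/bin"
    fish_add_path "$HOME/.cargo/bin"
end
```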
To remove OpenFang's PATH entry cleanly:

```bash
rm ~/.config/fish/conf.d/openfang.fish
```
### Daemon not reachable in Docker

A common cause is a missing port mapping. Publish the API port (here host port 3001 maps to the container's 4200):

```bash
docker run -e GROQ_API_KEY=... -p 3001:4200 ghcr.io/RightNow-AI/openfang
```

### Config file not found

Fix: Run `openfang init` to create the default config:

```bash
openfang init
```

This creates `~/.openfang/config.toml` with sensible defaults.
### No provider API key found

Cause: No LLM provider API key found in the environment.

Fix: Set at least one provider key:

```bash
export GROQ_API_KEY="gsk_..."   # Groq (free tier available)
# OR
export ANTHROPIC_API_KEY="sk-ant-..."
# OR
export OPENAI_API_KEY="sk-..."
```

Add the export to your shell profile to persist across sessions.
### Config validation errors

Run validation manually:

```bash
openfang config show
```
### Port already in use

Fix: Change the API port in config, or kill the existing process:

```toml
# In ~/.openfang/config.toml:
[api]
listen_addr = "127.0.0.1:3001"
```

```bash
# Linux/macOS: find and kill the process using the port
lsof -i :4200
kill $(lsof -ti :4200)

# Windows:
netstat -aon | findstr :4200
```
### Provider authentication errors

Common causes: the key is unset, invalid, or expired.

Fix: Verify your key:

```bash
# Check the env var is set
echo $GROQ_API_KEY

# Test the provider
curl -X POST http://127.0.0.1:4200/api/providers/groq/test
```
### 429 errors from the provider

Cause: Too many requests to the LLM provider.

Fix: Wait and retry, switch to a provider with higher limits, or cap usage with `max_llm_tokens_per_hour` in the agent's capabilities.

### Responses are slow

Possible causes: a large model, a slow provider, or an oversized session (use `/compact` to shrink it).

Fix: Use per-agent model overrides so simple agents run on faster models:
```toml
[model]
provider = "groq"
model = "llama-3.1-8b-instant"   # Fast, small model
```
### Model not found

Fix: Check the available models:

```bash
curl http://127.0.0.1:4200/api/models
```

Or use an alias:

```toml
[model]
model = "llama"   # Alias for llama-3.3-70b-versatile
```

See the full alias list:

```bash
curl http://127.0.0.1:4200/api/models/aliases
```
### Local provider not responding

Fix: Ensure the local server is running:

```bash
# Ollama
ollama serve   # Default: http://localhost:11434

# vLLM
python -m vllm.entrypoints.openai.api_server --model ...

# LM Studio: start from the LM Studio UI and enable the API server
```
### Telegram bot not responding

Checklist:

- `TELEGRAM_BOT_TOKEN` is set (`echo $TELEGRAM_BOT_TOKEN`)
- You have started a chat with the bot (send `/start` in Telegram)
- If `allowed_users` is set, your Telegram user ID is in the list

### Slack bot not responding

Checklist:

- `SLACK_BOT_TOKEN` (`xoxb-`) and `SLACK_APP_TOKEN` (`xapp-`) are set
- The bot has the required scopes: `chat:write`, `app_mentions:read`, `im:history`, `im:read`, `im:write`
### Other channel failures

Check the logs for the specific error:

```bash
RUST_LOG=openfang_channels=debug openfang start
```
### Agent stuck in a tool loop

Cause: The agent is repeatedly calling the same tool with the same parameters.

Automatic protection: OpenFang has a built-in loop guard.

Manual fix: Cancel the agent's current run:

```bash
curl -X POST http://127.0.0.1:4200/api/agents/{id}/stop
```

Or via the chat command `/stop`.
### Context window exceeded

Cause: The conversation history is too long for the model's context window.

Fix: Compact the session:

```bash
curl -X POST http://127.0.0.1:4200/api/agents/{id}/session/compact
```

Or via the chat command `/compact`.

Auto-compaction is enabled by default once a session reaches the threshold (configurable in `[compaction]`).
### Agent can't use tools

Cause: The tools are not granted in the agent's capabilities.

Fix: Check the agent's manifest:

```toml
[capabilities]
tools = ["file_read", "web_fetch", "shell_exec"]   # Must list each tool
# OR
# tools = ["*"]   # Grant all tools (use with caution)
```
### Capability denied errors

Cause: The agent is trying to use a tool or access a resource not in its capabilities.

Fix: Add the required capability to the agent manifest. Common ones:

- `tools = [...]` for tool access
- `network = ["*"]` for network access
- `memory_write = ["self.*"]` for memory writes
- `shell = ["*"]` for shell commands (use with caution)

Check a manifest without spawning it:

```bash
openfang agent spawn --dry-run manifest.toml
```

### 401 Unauthorized

Cause: An API key is required but was not provided.
Fix: Include the Bearer token:

```bash
curl -H "Authorization: Bearer your-api-key" http://127.0.0.1:4200/api/agents
```
### 429 from the OpenFang API

Cause: The GCRA rate limiter was triggered.

Fix: Wait for the `Retry-After` period, or increase the rate limit in config:

```toml
[api]
rate_limit_per_second = 20   # Increase if needed
```
### CORS errors in the browser

Cause: The API is being accessed from a different origin.

Fix: Add your origin to the CORS config:

```toml
[api]
cors_origins = ["http://localhost:5173", "https://your-app.com"]
```
### WebSocket disconnects

Possible causes include network interruptions, proxy idle timeouts, or a daemon restart.

Client-side fix: Implement reconnection logic with exponential backoff.
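A backoff schedule of that shape can be sketched in shell (illustrative only; the loop just prints the waits a client would insert between reconnect attempts):

```shell
# Exponential backoff: double the wait after each failed attempt, cap at 60s.
delay=1
for attempt in 1 2 3 4 5 6 7 8; do
  echo "attempt $attempt: wait ${delay}s before reconnecting"
  delay=$(( delay * 2 ))
  if [ "$delay" -gt 60 ]; then delay=60; fi
done
```

On a successful reconnect, a real client would reset `delay` back to 1.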
### OpenAI-compatible endpoint issues

Checklist:

- POST to `/v1/chat/completions` (not `/api/agents/{id}/message`)
- Use `openfang:agent-name` as the model name (e.g., `openfang:coder`)
- Set `"stream": true` for SSE responses
- Send images as `image_url` entries in the `data:image/png;base64,...` format
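Putting the checklist together, a request might look like this (sketch; the agent name `coder` and the Bearer token are placeholders):

```shell
# Request body for the OpenAI-compatible endpoint.
payload='{
  "model": "openfang:coder",
  "stream": true,
  "messages": [{"role": "user", "content": "Hello"}]
}'
echo "$payload"
# Send it against a running daemon:
# curl -s http://127.0.0.1:4200/v1/chat/completions \
#   -H "Authorization: Bearer your-api-key" \
#   -H "Content-Type: application/json" \
#   -d "$payload"
```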
### Daemon state issues

Checklist: try deleting `~/.openfang/daemon.json` and restarting.

### API server not responding right after start

Cause: The embedded API server hasn't started yet.

Fix: Wait a few seconds. If the problem persists, check the logs for server startup errors.
### System tray icon not showing

Platform-specific: the tray requires `libappindicator` on GNOME.

Tip: old sessions can be removed via `DELETE /api/sessions/{id}`.

### Slow startup

Normal startup: <200 ms for the kernel, ~1-2 s with channel adapters. If slower, a likely cause is a large database (`~/.openfang/data/openfang.db`).

## FAQ

### How do I change the default model?
Edit `~/.openfang/config.toml`:

```toml
[default_model]
provider = "groq"
model = "llama-3.3-70b-versatile"
api_key_env = "GROQ_API_KEY"
```
### Can different agents use different providers?

Yes. Each agent can use a different provider via the `[model]` section of its manifest. The kernel creates a dedicated driver per unique provider configuration.
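For example, two manifests might pin different providers (illustrative sketch; the model names are taken from examples elsewhere in this guide):

```toml
# agent-fast/agent.toml — hosted, low-latency model
[model]
provider = "groq"
model = "llama-3.1-8b-instant"
```

```toml
# agent-local/agent.toml — fully local model via Ollama
[model]
provider = "ollama"
model = "llama3.2"
```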
### Where do I configure channels?

In `~/.openfang/config.toml` under `[channels]`.

### How do I update OpenFang?

```bash
# From source
cd openfang && git pull && cargo install --path crates/openfang-cli

# Docker
docker pull ghcr.io/RightNow-AI/openfang:latest
```
### Can agents talk to each other?

Yes. Agents can use the `agent_send`, `agent_spawn`, `agent_find`, and `agent_list` tools to communicate. The orchestrator template is designed specifically for multi-agent delegation.
### What data leaves my machine?

Only LLM API calls go to the provider's servers. All agent data, memory, sessions, and configuration are stored locally in SQLite (`~/.openfang/data/openfang.db`). The OFP wire protocol uses HMAC-SHA256 mutual authentication for P2P communication.
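HMAC-SHA256, the primitive behind that mutual authentication, can be computed with standard tools; this is an illustration of the primitive only, not the actual OFP wire format:

```shell
# Tag a message with a shared secret. Only holders of the secret can
# reproduce (and therefore verify) the 64-hex-digit tag.
printf 'hello' | openssl dgst -sha256 -hmac 'shared-secret'
```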
### How do I back up my data?

Back up these files:

- `~/.openfang/config.toml` (configuration)
- `~/.openfang/data/openfang.db` (all agent data, memory, sessions)
- `~/.openfang/skills/` (installed skills)

### How do I reset everything?

```bash
rm -rf ~/.openfang
openfang init   # Start fresh
```
### Can OpenFang run fully offline?

Yes, if you use a local LLM provider, e.g. Ollama (`ollama serve` + `ollama pull llama3.2`). Set the provider in config:

```toml
[default_model]
provider = "ollama"
model = "llama3.2"
```
### How does OpenFang compare to OpenClaw?

| Aspect | OpenFang | OpenClaw |
|---|---|---|
| Language | Rust | Python |
| Channels | 40 | 38 |
| Skills | 60 | 57 |
| Providers | 20 | 3 |
| Security | 16 systems | Config-based |
| Binary size | ~30 MB | ~200 MB |
| Startup | <200 ms | ~3 s |
OpenFang can import OpenClaw configs: `openfang migrate --from openclaw`
### What are the system requirements?

| Resource | Minimum | Recommended |
|---|---|---|
| RAM | 128 MB | 512 MB |
| Disk | 50 MB (binary) | 500 MB (with data) |
| CPU | Any x86_64/ARM64 | 2+ cores |
| OS | Linux, macOS, Windows | Any |
| Rust | 1.75+ (build only) | Latest stable |
### Can I set log levels per crate?

Yes, via `RUST_LOG` filters:

```bash
RUST_LOG=openfang_runtime=debug,openfang_channels=info openfang start
```
### Can I use OpenFang crates as libraries?

Yes. Each crate is independently usable:

```toml
[dependencies]
openfang-runtime = { path = "crates/openfang-runtime" }
openfang-memory = { path = "crates/openfang-memory" }
```

The `openfang-kernel` crate assembles everything, but you can use individual crates for custom integrations.
### How do I install the latest release?

Re-run the install script:

```bash
curl -fsSL https://openfang.sh/install | sh
```

Or build from source:

```bash
git pull origin main
cargo build --release -p openfang-cli
```
### How do I run OpenFang in Docker?

```bash
docker run -d --name openfang \
  -e GROQ_API_KEY=your_key_here \
  -p 4200:4200 \
  ghcr.io/rightnow-ai/openfang:latest
```
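Equivalently, the same container can be declared in a compose file (sketch derived from the flags above):

```yaml
services:
  openfang:
    image: ghcr.io/rightnow-ai/openfang:latest
    environment:
      - GROQ_API_KEY=your_key_here
    ports:
      - "4200:4200"
    restart: unless-stopped
```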
### How do I secure the dashboard?

OpenFang has built-in dashboard authentication. Enable it in `~/.openfang/config.toml`:

```toml
[auth]
enabled = true
username = "admin"
password_hash = "$argon2id$..."   # See below
```

Generate the password hash:

```bash
openfang auth hash-password
```

Paste the output into the `password_hash` field and restart the daemon.
For public-facing deployments, you should also place a reverse proxy (Caddy, nginx) in front for TLS termination.
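With Caddy, such a proxy can be as small as the following (illustrative Caddyfile; replace the domain with your own):

```
your-domain.example {
    reverse_proxy 127.0.0.1:4200
}
```

Caddy obtains and renews the TLS certificate automatically for the named domain.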
### How do I configure memory embeddings?

In `~/.openfang/config.toml`:

```toml
[memory]
embedding_provider = "openai"   # or "ollama", "gemini"
embedding_model = "text-embedding-3-small"
embedding_api_key_env = "OPENAI_API_KEY"
```

For local Ollama embeddings:

```toml
[memory]
embedding_provider = "ollama"
embedding_model = "nomic-embed-text"
```
### How do I stop the email channel replying to everyone?

Add `allowed_senders` to your email config:

```toml
[channels.email]
allowed_senders = ["[email protected]", "[email protected]"]
```

An empty list means OpenFang responds to everyone. Always set this to avoid auto-replying to spam.
### How do I use GLM (Z.ai) models?

```toml
[default_model]
provider = "zai"
model = "glm-5-20250605"
api_key_env = "ZHIPU_API_KEY"
```
### How do I use Kimi models?

Kimi models are built-in. Use the alias `kimi` or the full model ID:

```toml
[default_model]
provider = "moonshot"
model = "kimi-k2.5"
api_key_env = "MOONSHOT_API_KEY"
```
### Can I run multiple bots on the same channel?

Not yet: each channel type currently supports one bot. Multi-bot routing is tracked as a feature request (#586). As a workaround, run multiple OpenFang instances on different ports with different configs.
### How do I skip Claude Code permission prompts?

Add to `~/.openfang/config.toml`:

```toml
[claude_code]
skip_permissions = true
```

Then restart the daemon.
### The trader hand can't run scripts

The trader hand needs shell access to execute trading scripts. In your agent's `agent.toml`:

```toml
[capabilities]
shell = ["python *", "node *"]
```
### OpenRouter free models return empty responses

OpenRouter's free models have strict rate limits and may return empty responses. Use a paid model or a different free provider such as Groq (`GROQ_API_KEY`).