docs/guides/troubleshooting.md
This page addresses frequently asked questions (FAQ) and provides troubleshooting steps for common issues encountered while using Agent Zero.
1. How do I ask Agent Zero to work directly on my files or dirs?
Place them in (or map them into) /a0/usr. Agent Zero will be able to perform tasks on them.
2. When I input something in the chat, nothing happens. What's wrong?
3. I get “Invalid model ID.” What does that mean?
The model name you entered doesn't match what the selected provider expects. For example, openai/gpt-5.3 is correct for OpenRouter, but incorrect for the native OpenAI provider, which uses the model name without a prefix.
4. Does ChatGPT Plus include API access?
No. ChatGPT Plus is a subscription to the ChatGPT apps only; API access is billed separately through an OpenAI API account.
5. Where is chat history stored?
In /a0/usr/chats/ inside the container.
6. How do I integrate open-source models with Agent Zero?
Refer to the Choosing your LLMs section for configuring local models (Ollama, LM Studio, etc.).
> [!TIP]
> Some LLM providers offer free usage tiers, for example Groq, Mistral, SambaNova, or CometAPI.
7. How can I make Agent Zero retain memory between sessions?
Use Settings → Backup & Restore and avoid mapping the entire /a0 directory. See How to update Agent Zero.
8. My browser tool fails or says Playwright is missing. What now?
The built-in browser is provided by the _browser plugin and the direct browser tool.
- Docker: the Chromium headless shell ships preinstalled (typically under /a0/tmp/playwright).
- Local development: if the binary is missing, ensure_playwright_binary() in plugins/_browser/helpers/playwright.py runs playwright install chromium --only-shell into tmp/playwright on first browser use (you may see UI notifications). To install ahead of time, run PLAYWRIGHT_BROWSERS_PATH=tmp/playwright playwright install chromium --only-shell after pip install -r requirements.txt.
- If you prefer an external browser stack, use MCP alternatives such as Browser OS, Chrome DevTools, or Playwright MCP. See MCP Setup.
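To check ahead of time whether the headless shell is already installed, a small shell sketch like the following can help. The default tmp/playwright path comes from the notes above; the helper function name is illustrative, not part of Agent Zero.

```shell
# Illustrative helper: report whether a Chromium build exists under the
# given Playwright browsers path, and print the install command if not.
check_chromium() {
  dir="${1:-tmp/playwright}"
  if ls "$dir"/chromium* >/dev/null 2>&1; then
    echo "present"
  else
    echo "missing: run PLAYWRIGHT_BROWSERS_PATH=$dir playwright install chromium --only-shell"
  fi
}
check_chromium
```

If it reports "missing", run the printed install command from the repository root.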
9. My secrets disappeared after a backup restore.
Secrets are stored in /a0/usr/secrets.env and are not always included in backup archives. Copy them manually.
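Since a restore can drop the secrets file, one way to copy it manually is a pair of small helpers like these. The /a0/usr/secrets.env path comes from the answer above; the backup destination and function names are just examples.

```shell
# Illustrative helpers: stash the secrets file before a restore, put it back after.
backup_secrets() {
  cp "${1:-/a0/usr/secrets.env}" "${2:-/root/secrets.env.bak}"
}
restore_secrets() {
  cp "${2:-/root/secrets.env.bak}" "${1:-/a0/usr/secrets.env}"
}
```

Run backup_secrets before using Settings → Backup & Restore, then restore_secrets afterwards.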
10. Where can I find more documentation or tutorials?
11. How do I adjust API rate limits?
Use the model rate limit fields in Settings (Main Model and Utility Model sections) to set request/input/output limits. These map to the model config limits (for example limit_requests, limit_input, limit_output).
12. My code_execution_tool doesn't work, what's wrong?
13. Can Agent Zero interact with external APIs or services (e.g., WhatsApp)?
Yes, by creating custom tools or using MCP servers. See Extensions and MCP Setup.
Installation
The Web UI is served on container port 80. If you used 0:80 as the port mapping, check the assigned host port in Docker Desktop.
Usage
From the host, find the container name:

```shell
docker ps
```

Open a shell inside the container:

```shell
docker exec -it <container> /bin/bash
```

Queue an update for the next startup attempt with the recovery script in /exe:

```shell
/exe/trigger_self_update.sh
```
That default command writes /exe/a0-self-update.yaml with main and latest, so the next startup tries the newest release within the currently installed major version. You can also specify the branch, version, and backup settings:

```shell
/exe/trigger_self_update.sh ready latest
/exe/trigger_self_update.sh main v1.10 --backup-dir /root/update-backups --backup-name usr-recovery.zip
/exe/trigger_self_update.sh development latest --no-backup
```
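For reference, the scheduled-update file written by the default command might look something like the fragment below. The exact key names are an assumption, not confirmed; cat /exe/a0-self-update.yaml in your container to see the real contents.

```yaml
# Illustrative /exe/a0-self-update.yaml after running trigger_self_update.sh
# with no arguments (key names are a guess):
branch: main
version: latest
```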
You can run the same commands directly from the host without opening a shell:

```shell
docker exec -it <container> /exe/trigger_self_update.sh
docker exec -it <container> /exe/trigger_self_update.sh ready latest
docker exec -it <container> tail -n 200 /exe/a0-self-update.log
docker exec -it <container> cat /exe/a0-self-update-status.yaml
```
The recovery command only schedules the update. Restart the container or let Agent Zero start again, then check /exe/a0-self-update.log and /exe/a0-self-update-status.yaml to see what happened.
Error Messages: Pay close attention to the error messages displayed in the Web UI or terminal. They often provide valuable clues for diagnosing the issue. Refer to the specific error message in online searches or community forums for potential solutions.
Performance Issues: If Agent Zero is slow or unresponsive, it might be due to resource limitations, network latency, or the complexity of your prompts and tasks, especially when using local models.