docs/FAQ_EN.md
This document compiles common issues encountered by users and their solutions.
Symptom: After entering US stock codes, displayed prices are clearly wrong (e.g., AMD showing 7.33 yuan), or the ticker is misidentified as an A-share.
Cause: Earlier versions' code-matching logic prioritized A-share rules, causing ticker conflicts.
Solution: Add the following to `.env`:

```ini
YFINANCE_PRIORITY=0
```
Related Issue: #153
Symptom: Volume ratio data is missing from analysis reports, affecting the AI's judgment of volume changes.
Cause: Some default real-time quote sources (e.g., the Sina interface) don't provide a volume ratio field.
Solution: Adjust the real-time source priority in `.env`:

```ini
REALTIME_SOURCE_PRIORITY=tencent,akshare_sina,efinance,akshare_em
```
Related Issue: #155
Symptom: Log shows `Tushare data fetch failed: Your token is incorrect, please verify`
Solution:
Check that `TUSHARE_TOKEN` is valid, or remove it; without a token, the system will automatically use free data sources (AkShare, Efinance).
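In `.env`, either option works (the token value is a placeholder):

```ini
# Option A: a valid token from tushare.pro
TUSHARE_TOKEN=your-real-token
# Option B: leave TUSHARE_TOKEN unset; free sources are used automatically
```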
Symptom: Log shows `Circuit breaker triggered` or data returns `None`
Cause: Free data sources (Eastmoney, Sina, etc.) have anti-scraping mechanisms; high-frequency requests get rate-limited.
Solution: Reduce request frequency (e.g., increase `ANALYSIS_DELAY`), wait for the rate limit to reset, or switch quote sources via `REALTIME_SOURCE_PRIORITY`.
Symptom: Actions log shows `GEMINI_API_KEY` or `STOCK_LIST` undefined
Cause: GitHub distinguishes between Secrets (encrypted) and Variables (plain variables); configuring a value in the wrong place makes it unreadable.
Solution:
Go to Settings → Secrets and variables → Actions:
- Secrets (New repository secret): store sensitive information, e.g. `GEMINI_API_KEY`, `OPENAI_API_KEY`, `TELEGRAM_BOT_TOKEN`
- Variables (Variables tab): store non-sensitive configuration, e.g. `STOCK_LIST`, `GEMINI_MODEL`, `REPORT_TYPE`
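The same can be done from the terminal with the GitHub CLI, a sketch assuming a recent `gh` that is installed and authenticated (values are placeholders):

```bash
# encrypted repository secret (sensitive)
gh secret set GEMINI_API_KEY --body "your-api-key"
# plain repository variable (non-sensitive)
gh variable set STOCK_LIST --body "600519,AAPL"
```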
Solution:
- Confirm the `.env` file is in the project root directory
- The WebUI settings page writes `STOCK_LIST`, `SCHEDULE_ENABLED`, `SCHEDULE_TIME`, `SCHEDULE_RUN_IMMEDIATELY`, and `RUN_IMMEDIATELY` back into the container's `.env`; for example, scheduled runs keep hot-reading the saved `STOCK_LIST`
- If you passed variables explicitly at startup (`docker run -e ...` or Compose `environment:`), those explicit process env overrides still win on later restarts; update or remove them if you want the WebUI-saved `.env` values to take over
- `SCHEDULE_*` and `RUN_IMMEDIATELY` are still startup-time scheduling settings: saving them does not immediately trigger an analysis run and does not hot-rebuild the scheduler inside the current process
- For other `.env` edits in Docker: restart the container after changes
```bash
docker-compose down && docker-compose up -d
```
- In GitHub Actions, the `.env` file doesn't work; you must configure Secrets/Variables
- Check for multiple `.env` files (e.g., `.env.local`) causing overrides

Solution:
Configure in `.env`:

```ini
USE_PROXY=true
PROXY_HOST=127.0.0.1
PROXY_PORT=10809
```
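To confirm the proxy itself is reachable before blaming the API, a quick sketch (port as in the example above):

```bash
# -x routes the request through the proxy; any HTTP response means the tunnel works
curl -x http://127.0.0.1:10809 -I https://generativelanguage.googleapis.com
```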
Note: Proxy configuration only works for local runs; the GitHub Actions environment doesn't need a proxy.
Full details: LLM Config Guide.
Q: I configured both GEMINI_API_KEY and LLM_CHANNELS; why does it only use the channels?
The system uses exactly one mode by priority: advanced YAML routing (LITELLM_CONFIG) > LLM_CHANNELS > legacy keys. However, YAML routing only takes effect when the file can be parsed successfully and yields a non-empty model_list; if the YAML path is invalid or the content is empty, the system automatically falls back to LLM_CHANNELS or legacy keys. Once a tier is active, lower-priority tiers are not used.
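For illustration, how the three tiers look in `.env` (file path, channel names, and key are placeholders); only the highest active tier is used:

```ini
# Tier 1: advanced YAML routing; wins if the file parses and yields a non-empty model_list
LITELLM_CONFIG=./litellm_config.yaml
# Tier 2: channel mode; used only when Tier 1 is absent or invalid
LLM_CHANNELS=deepseek,gemini
# Tier 3: legacy keys; used only when neither tier above is active
GEMINI_API_KEY=your-gemini-key
```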
Q: test_env says no usable AI model is configured, what should I do?
Start with one provider and its API key. If you want to pin a primary model, add LITELLM_MODEL=provider/model. If you need multi-model switching, configure LLM_CHANNELS or advanced YAML routing. Run python test_env.py --config to validate config and python test_env.py --llm to actually call the API.
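The two commands side by side:

```bash
python test_env.py --config   # validate configuration only
python test_env.py --llm      # make a real call to the configured LLM
```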
Q: How to use multiple models at once (e.g. AIHubmix + DeepSeek + Gemini)?
Use channel mode: set LLM_CHANNELS=aihubmix,deepseek,gemini and configure each channel's LLM_{NAME}_BASE_URL, LLM_{NAME}_API_KEY, LLM_{NAME}_MODELS. You can also configure this visually in Web Settings → AI Model → AI Model Access.
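A sketch of channel mode for those three providers; all keys, model names, and the AIHubmix/Gemini base URLs are placeholders to adapt (the DeepSeek URL is the one from the configuration example below):

```ini
LLM_CHANNELS=aihubmix,deepseek,gemini
# AIHubmix
LLM_AIHUBMIX_BASE_URL=https://aihubmix.com/v1
LLM_AIHUBMIX_API_KEY=your-aihubmix-key
LLM_AIHUBMIX_MODELS=gpt-4o-mini
# DeepSeek
LLM_DEEPSEEK_BASE_URL=https://api.deepseek.com
LLM_DEEPSEEK_API_KEY=your-deepseek-key
LLM_DEEPSEEK_MODELS=deepseek-chat
# Gemini
LLM_GEMINI_BASE_URL=https://generativelanguage.googleapis.com
LLM_GEMINI_API_KEY=your-gemini-key
LLM_GEMINI_MODELS=gemini-2.0-flash
```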
Q: The ask-stock / Agent page says no usable LLM is configured, but I only use legacy GEMINI_* / OPENAI_* / ANTHROPIC_* settings. What should I check?
First confirm whether LITELLM_CONFIG or LLM_CHANNELS is active, because either of those tiers overrides legacy keys. If neither tier is active and AGENT_LITELLM_MODEL is empty, the ask-stock Agent still inherits legacy provider models automatically: GEMINI_MODEL, OPENAI_MODEL, and ANTHROPIC_MODEL are mapped to LiteLLM provider-prefixed model names for the corresponding runtime. This fix does not silently migrate or clear old settings; it only returns the real backend reason to the frontend so you can see whether the issue is a missing key, a missing model name, or an upper-tier config taking precedence. Full compatibility details are documented in the LLM Config Guide under “Ask-Stock Agent / LiteLLM compatibility notes”.
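For illustration, a legacy-only setup the Agent inherits from (the model name is a placeholder):

```ini
# neither LITELLM_CONFIG nor LLM_CHANNELS is set, AGENT_LITELLM_MODEL is empty
GEMINI_API_KEY=your-gemini-key
GEMINI_MODEL=gemini-2.0-flash
# the Agent maps GEMINI_MODEL to a provider-prefixed LiteLLM name such as gemini/gemini-2.0-flash
```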
Symptom: Analysis succeeded but no notification was received; log shows a 400 error or `Message too long`
Cause: Different platforms have different message length limits.
Solution:
- Set `SINGLE_STOCK_NOTIFY=true` to push immediately after each stock analysis
- Set `REPORT_TYPE=simple` for the simplified report format

Solution:
- Confirm `TELEGRAM_BOT_TOKEN` and `TELEGRAM_CHAT_ID` are configured
- Visit `https://api.telegram.org/bot<TOKEN>/getUpdates`
- Find `chat.id` in the returned JSON
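For example (replace `<TOKEN>` with your bot token; send your bot any message first so `getUpdates` has an update to return):

```bash
curl -s "https://api.telegram.org/bot<TOKEN>/getUpdates"
# look for "chat":{"id":123456789} in the JSON response
```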
Solution:

```ini
WECHAT_MSG_TYPE=text
```
Symptom: Log shows `Resource has been exhausted` or `429 Too Many Requests`
Solution:

```ini
GEMINI_REQUEST_DELAY=5
ANALYSIS_DELAY=10
```
Configuration method:
```ini
# No need to configure GEMINI_API_KEY
OPENAI_API_KEY=sk-xxxxxxxx
OPENAI_BASE_URL=https://api.deepseek.com
OPENAI_MODEL=deepseek-v4-flash
# deepseek-chat / deepseek-reasoner remain compatible, but DeepSeek marks them deprecated after 2026/07/24
```
Supported model services:
- DeepSeek: `https://api.deepseek.com`
- Alibaba DashScope (compatible mode): `https://dashscope.aliyuncs.com/compatible-mode/v1`
- Moonshot: `https://api.moonshot.cn/v1`

Ollama configuration: Use `OLLAMA_API_BASE` + `LITELLM_MODEL`, or channel mode (`LLM_CHANNELS=ollama` + `LLM_OLLAMA_BASE_URL` + `LLM_OLLAMA_MODELS`).
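For example (host and model values match the checklist below; adjust to your setup):

```ini
OLLAMA_API_BASE=http://localhost:11434
LITELLM_MODEL=ollama/qwen3:8b
```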
Pitfall: Do not use `OPENAI_BASE_URL` for Ollama, or the system will concatenate URLs incorrectly (e.g. 404 on `api/generate/api/show`). See LLM Config Guide Example 4 and the channel examples.
Q: OllamaException / APIConnectionError (All LLM models failed)?
Symptom: Log shows `litellm.APIConnectionError: OllamaException` or `Analysis failed: All LLM models failed (tried 1 model(s))`.
Work through the following 5 checkpoints in order:
1. Is the Ollama service running?

```bash
# Check process
pgrep -a ollama
# If no output, start it first
ollama serve
```

Verify it is listening: `curl http://localhost:11434` should return `Ollama is running`.
2. Is OLLAMA_API_BASE set correctly?
- Correct: `OLLAMA_API_BASE=http://localhost:11434`
- Wrong: using `OPENAI_BASE_URL`, which causes the URL path to be mangled (e.g. `…/api/generate/api/show`).

3. Does the model name include the ollama/ prefix?
- Correct: `LITELLM_MODEL=ollama/qwen3:8b`
- Wrong: `LITELLM_MODEL=qwen3:8b` (missing prefix; litellm cannot route to Ollama)

4. Has the model been pulled locally?
```bash
ollama list          # list downloaded models
ollama pull qwen3:8b # pull if missing
```
5. Network / firewall for remote or Docker deployments
- If Ollama runs on another machine (or outside the container), set `OLLAMA_API_BASE` to its actual IP, e.g. `http://192.168.1.100:11434`.
- Make sure Ollama accepts external connections (`OLLAMA_HOST=0.0.0.0:11434`).

See LLM Config Guide → Example 4 (Ollama) for a complete configuration example.
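A quick reachability check for checkpoint 5, run from the machine (or container) doing the analysis (IP as in the example above):

```bash
curl http://192.168.1.100:11434
# expected response: "Ollama is running"
```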
Solution:
```bash
docker logs <container_id>
```
Common cause: `.env` file format error (e.g., extra spaces)

Solution:
- Start the service with `--host 0.0.0.0` (cannot be `127.0.0.1`)
- Confirm the port mapping in `docker-compose.yml`:

```yaml
ports:
  - "8000:8000"
```
Short answer: For Docker users, the authoritative version is the image tag you actually deployed, not a hardcoded constant in a Python source file.
Why:
- Docker images are built by `.github/workflows/docker-publish.yml`, which only publishes release images for Git tags matching `v*.*.*` (for example, `v3.12.0`).
- There is no authoritative version constant hardcoded in `main.py`, `server.py`, or another backend module.
- The `version` field in `apps/dsa-web/package.json` is currently a placeholder `0.0.0`. The WebUI version/build card is useful for checking whether frontend assets were rebuilt, but it is not the Docker release version.
- A separate version lives in `apps/dsa-desktop/package.json`, and that only applies to the Electron desktop build, not the Docker image.

How to check your current Docker version:
- If your deployed image tag is `ghcr.io/zhulinsen/daily_stock_analysis:v3.12.0`, the deployed version is `v3.12.0`.
- If the tag is `latest`, check your original `docker pull`, `docker-compose.yml`, or deployment script, then compare with GitHub Releases.
- The WebUI card shows `Build ID / Build Time`; that confirms static asset freshness, not the Docker release version.

Recommendation: To avoid repeated updates, prefer a pinned version tag such as `v3.12.0` instead of relying on `latest`.
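To read the deployed image reference straight from a running container (`<container_id>` as in the log example above):

```bash
docker inspect --format '{{.Config.Image}}' <container_id>
# prints e.g. ghcr.io/zhulinsen/daily_stock_analysis:v3.12.0
```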
Method:
```bash
# Local run
python main.py --market-only

# GitHub Actions
# Select mode: market-only when manually triggering
```
Cause: Earlier versions used regex matching for statistics, which might not match actual recommendations.
Solution: Fixed in the latest version; the AI model now directly outputs a `decision_type` field for accurate statistics.
If the above content doesn't solve your issue, feel free to submit a GitHub Issue.
Last updated: 2026-04-20