wren-ai-service/docs/config_examples/README.md
These config files are examples, so please read each file and the comments inside it carefully. Try to understand the purpose of every section and parameter; simply copying and pasting the contents of these files into your own config file will not work. For more detailed information about the configuration options, please read this file.
We also welcome contributions that add config files for other LLM providers.
The `config.qwen3.yaml` file provides an example configuration for using Qwen3 models with their unique thinking and non-thinking capabilities. Qwen3 models support two modes:
**Thinking mode**
- Add `/think` to your prompts to enable step-by-step reasoning
- Recommended sampling parameters: `temperature=0.6`, `top_p=0.95`, `top_k=20`
- Use the `qwen3-thinking` alias in the pipeline configuration

**Non-thinking mode**
- Add `/no_think` to your prompts for direct, fast responses
- Recommended sampling parameters: `temperature=0.7`, `top_p=0.8`, `top_k=20`
- Use the `qwen3-fast` alias in the pipeline configuration

**Available models**
- `qwen/qwen3-30b-a3b`: 30B parameter MoE model (3.3B activated)
- `qwen/qwen3-32b`: 32B parameter dense model
- `qwen/qwen3-8b`: 8B parameter dense model
- `qwen/qwen3-14b`: 14B parameter dense model

**Example prompts**

```
# Enable thinking for complex reasoning
"Explain the mathematical proof for the Pythagorean theorem /think"

# Use fast mode for simple queries
"What is the capital of France? /no_think"
```
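As a rough sketch of how the two aliases could be wired up, the fragment below follows a litellm-style model list; the exact field names and the choice of `qwen/qwen3-30b-a3b` as the backing model are assumptions for illustration, so verify them against the actual `config.qwen3.yaml` before use:

```yaml
# Hypothetical sketch -- check field names against the real config.qwen3.yaml.
type: llm
provider: litellm_llm
models:
  - alias: qwen3-thinking          # step-by-step reasoning (/think)
    model: openrouter/qwen/qwen3-30b-a3b
    kwargs:
      temperature: 0.6
      top_p: 0.95
      top_k: 20
  - alias: qwen3-fast              # direct, fast responses (/no_think)
    model: openrouter/qwen/qwen3-30b-a3b
    kwargs:
      temperature: 0.7
      top_p: 0.8
      top_k: 20
```

Pipelines can then refer to `qwen3-thinking` or `qwen3-fast` depending on whether a step needs deep reasoning or a quick answer.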
Note: you need to set `OPENROUTER_API_KEY` in your `~/.wrenai/.env` file to use OpenRouter as the provider for Qwen3 models.
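Setting the key is a one-line append to the env file; `your-key-here` below is a placeholder, not a real credential:

```shell
# Ensure the WrenAI config directory exists, then append the OpenRouter key.
mkdir -p ~/.wrenai
echo 'OPENROUTER_API_KEY=your-key-here' >> ~/.wrenai/.env
```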