The Question Classification node has access to conversation context. When two consecutive questions are closely related, the model can usually classify them accurately based on their connection. For example, if a user asks "How do I use this feature?" followed by "What are the limitations?", the model leverages context to understand and respond correctly.
However, when consecutive questions have little relation to each other, classification accuracy may drop. To handle this, you can use a global variable to store the classification result. In subsequent classification steps, check the global variable first — if a result exists, reuse it; otherwise, let the model classify on its own.
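The "check the global variable first" pattern above can be sketched in plain code. This is a hypothetical illustration, not FastGPT's implementation: `classify_with_model` stands in for the Question Classification node, and the dictionary stands in for the workflow's global variable store.

```python
def classify_with_model(question: str) -> str:
    # Placeholder for the model-backed Question Classification node.
    return "feature_usage"

def classify(question: str, globals_store: dict) -> str:
    """Reuse a stored classification when one exists; otherwise ask the model."""
    cached = globals_store.get("last_classification")
    if cached:
        return cached  # skip the model for loosely related follow-ups
    result = classify_with_model(question)
    globals_store["last_classification"] = result  # save for later turns
    return result

store = {}
classify("How do I use this feature?", store)  # model classifies, result stored
classify("What are the limitations?", store)   # stored result reused
```

In a real workflow you would also decide when to invalidate the stored value, for example when the user explicitly changes topic.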
Tip: Build batch test scripts to evaluate your question classification accuracy.
If a user opens a shared link and stays on the page, scheduled execution still works as expected — it takes effect after the app is published and runs in the background.
Q: A warning reads "{{}} variable reference compatibility. Please switch to the / pattern for variables; the {{}} syntax is deprecated." Does this only affect HTTP nodes, or all nodes?
A: Only HTTP nodes use this syntax.
This is usually caused by not publishing correctly. Click Save and Publish in the top-right corner of the workflow editor.
Make sure you're on V4.8.17 or later, then change the suggested questions prompt to Chinese.
Edit the Knowledge Base default prompt. The built-in standard template instructs the model to use Markdown; you can remove that requirement.
Q: The app produces different results in debug mode vs. production, or when called via API.
A: This is usually caused by differences in context. Check the conversation logs, find the relevant entry, and compare the run details side by side.
Note: The Knowledge Base response settings require a custom prompt. Without one, the default prompt (which includes Markdown formatting instructions) is used.
Scenario: A workflow starts with a Question Classification node that routes to different branches, each with its own Knowledge Base and AI Chat. After the first AI response, you want subsequent questions to skip classification and go straight to the Knowledge Base with chat history as context.
Solution: Add a condition check — if it's the first message (history count is 0), route through Question Classification. Otherwise, go directly to the Knowledge Base and AI Chat.
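The routing rule in the solution reduces to a single branch on the chat-history count. A minimal sketch (node names are illustrative, not FastGPT identifiers):

```python
def route(history_count: int) -> str:
    """First message goes through classification; later ones skip straight
    to the Knowledge Base + AI Chat branch with chat history as context."""
    if history_count == 0:
        return "question_classification"
    return "knowledge_base_and_ai_chat"

route(0)  # first turn: classify
route(3)  # later turns: skip classification
```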
Scheduled execution doesn't support that kind of frequency. To build a real-time chatbot in WeCom, low-code workflows won't cut it — you'll need to write custom code that calls FastGPT's API Key for responses. WeCom doesn't provide an auto-listen interface for group messages (though you can trigger message pushes by @mentioning the bot). You can either send messages to the app and receive pushes via the message callback API, or poll for group messages using this API and push responses via this API.
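Whichever listening approach you choose (message callback or polling), the custom code ultimately forwards each question to FastGPT with your API Key. A minimal request builder, assuming a self-hosted base URL, an app-level API key, and the OpenAI-compatible `/api/v1/chat/completions` endpoint (all placeholders to verify against your deployment):

```python
import json

FASTGPT_BASE_URL = "https://your-fastgpt-host"  # assumption: your deployment URL
API_KEY = "fastgpt-xxxx"                        # assumption: an app API key

def build_chat_request(question: str, chat_id: str) -> tuple[str, dict, bytes]:
    """Assemble URL, headers, and JSON body for a FastGPT chat call."""
    url = f"{FASTGPT_BASE_URL}/api/v1/chat/completions"
    headers = {
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    }
    body = json.dumps({
        "chatId": chat_id,  # reusing the same chatId keeps conversation context
        "messages": [{"role": "user", "content": question}],
    }).encode("utf-8")
    return url, headers, body
```

Send the request with any HTTP client, then push the model's reply back to the WeCom group via the push API.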
Yes. Workflows support database connections. The database connection plugin can implement text-to-SQL, but it's risky — write operations are not recommended.
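One way to reduce that risk is to gate generated SQL behind a read-only check before executing it. A rough keyword-based sketch (a real guard should also use database-level read-only credentials):

```python
import re

# Statements that modify data or schema; reject anything starting with these.
WRITE_KEYWORDS = re.compile(
    r"^\s*(insert|update|delete|drop|alter|truncate|create|grant)\b",
    re.IGNORECASE,
)

def is_read_only(sql: str) -> bool:
    """Allow plain SELECT queries; reject writes and multi-statement tricks."""
    statements = [s for s in sql.split(";") if s.strip()]
    return all(not WRITE_KEYWORDS.match(s) for s in statements)

is_read_only("SELECT name FROM users")       # allowed
is_read_only("DROP TABLE users")             # rejected
is_read_only("SELECT 1; DELETE FROM users")  # rejected: hidden write
```

Keyword filtering alone is easy to bypass, which is why the safest setup is a database account with no write privileges at all.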
Think of it like a for loop — you pass in an array, and the loop body executes once for each element. Variables inside the loop are scoped locally to each iteration.
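The for-loop analogy can be made concrete. In this hypothetical sketch, each element passes through a body function whose local variables exist only for that single pass, mirroring how variables inside the workflow loop are scoped to each iteration:

```python
def loop_body(item: int) -> int:
    # `doubled` exists only inside this one iteration, like a
    # locally scoped variable in the workflow's loop node.
    doubled = item * 2
    return doubled

# The loop node runs the body once per element of the input array.
results = [loop_body(item) for item in [1, 2, 3]]  # [2, 4, 6]
```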
Add a prompt to guide the model to output formulas in LaTeX/Markdown format:
LaTeX inline: \(x^2\)
LaTeX block: $$e=mc^2$$