docs/usage/agent/chain-of-thought.mdx
LobeHub's Chain of Thought (CoT) visualization provides transparency into how AI models reason through complex problems, letting you watch the thinking process unfold in real time before the final answer arrives.
Chain of Thought is a reasoning technique where AI models break down complex problems into clear, logical steps before producing a final answer. LobeHub makes this internal reasoning visible so you can validate the logic, learn how a problem was approached, and debug answers that turn out to be wrong.
When a model uses Chain of Thought reasoning, its intermediate steps stream in real time, displayed in an expandable section above the final answer.
While the model is still thinking, this section streams the thought process live; once the response is complete, you can leave it collapsed.

**Best for**: Quick responses where you trust the reasoning, or when you want to keep the interface clean.

Expand the section to see:
- Complete step-by-step reasoning
- Intermediate conclusions
- Logical connections between steps
- Full thought process from start to finish
**Best for**: Validating complex reasoning, learning how problems are approached, debugging incorrect answers.
CoT reasoning is most beneficial — and most commonly used by models — for:
<AccordionGroup> <Accordion title="Mathematical Problems"> Multi-step calculations where the model sets up equations, performs intermediate calculations, and shows work at each step. Example: "Calculate the compound interest on $10,000 at 5% annual rate for 3 years, compounded quarterly." </Accordion> </AccordionGroup>
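For reference, here is one way the arithmetic behind the compound interest prompt above can be laid out. This is a sketch of the calculation itself, not a transcript of any particular model's reasoning:

```latex
% Worked compound interest (illustrative): P = 10000, r = 0.05, n = 4 periods per year, t = 3 years
A = P\left(1 + \tfrac{r}{n}\right)^{nt}
  = 10000\left(1 + \tfrac{0.05}{4}\right)^{4 \cdot 3}
  = 10000 \times (1.0125)^{12}
  \approx 11607.55
% Interest earned is the final amount minus the principal
\text{Compound interest} = A - P \approx 1607.55
```

In the expanded reasoning section, you would typically see the model narrate each of these substitutions and the intermediate exponentiation before stating the final figure.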
When reviewing Chain of Thought output, look for:
- **Logical progression**: Do steps build on each other? Are there clear connections between ideas? Does the reasoning flow naturally?
- **Completeness**: Are all aspects of the question addressed? Were edge cases considered? Is anything overlooked?
- **Validity**: Are assumptions reasonable? Is the logic sound? Do conclusions follow from the premises?
Follow the reasoning from start to finish. If a step seems incorrect or incomplete, continue the conversation: "That step seems wrong because..." or "Can you reconsider step 3?" The model can revise its reasoning based on your feedback.
<Callout type={'info'}> Not all models use Chain of Thought reasoning, and not all responses trigger it even on models that support it. CoT is most common with complex problems that benefit from step-by-step analysis. Supported models include o1, o3, Claude 3.7 Sonnet, Gemini 2.0 Flash Thinking, and others. </Callout>
<Cards> <Card href={'/docs/usage/getting-started/agent'} title={'Agent'} /><Card href={'/docs/usage/community/mcp-market'} title={'MCP Marketplace'} /> </Cards>