docs/explanation/advanced-elicitation.md
Make the LLM reconsider what it just generated. You pick a reasoning method, the LLM applies that method to its own output, and you decide whether to keep the improvements.
A structured second pass. Instead of asking the AI to "try again" or "make it better," you select a specific reasoning method and the AI re-examines its own output through that lens.
The difference matters. Vague requests produce vague revisions. A named method forces a particular angle of attack, surfacing insights that a generic retry would miss.
Workflows offer advanced elicitation at decision points - after the LLM has generated something, you'll be asked whether you want to run a second pass.
Dozens of reasoning methods are available. The AI picks the most relevant options for your content - you choose which to run.
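To make the difference between a vague retry and a named method concrete, here is a minimal sketch of how a structured second pass could be framed as a prompt. This is purely illustrative: the function name, the `METHODS` registry, and the pre-mortem wording are assumptions for the example, not part of any real tool's API.

```python
# Illustrative sketch: a second pass that names a specific reasoning method
# instead of a generic "try again". All identifiers here are hypothetical.

METHODS = {
    "pre-mortem": (
        "Assume this plan has already failed. Work backwards to list the "
        "most likely causes of failure, then revise the plan to address them."
    ),
    # ...dozens more methods could be registered here
}

def build_second_pass_prompt(draft: str, method: str) -> str:
    """Wrap a generated draft in instructions for one named reasoning method."""
    instructions = METHODS[method]
    return (
        f"Re-examine the draft below using this method: {instructions}\n\n"
        f"--- DRAFT ---\n{draft}\n--- END DRAFT ---\n"
        "Return a revised version. The user decides whether to keep it."
    )

prompt = build_second_pass_prompt(
    "Migrate the database during Friday's deploy window.", "pre-mortem"
)
print(prompt)
```

The point of the registry is that each entry forces a particular angle of attack; swapping the `method` key changes the lens without changing the draft or the surrounding workflow.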
:::tip[Start Here]
Pre-mortem Analysis is a good first pick for any spec or plan. It consistently finds gaps that a standard review misses.
:::