Roo Code 3.36 introduces non-destructive context management, new debugging and UI controls, and a steady stream of reliability fixes and provider improvements.
Context condensing and sliding window truncation now preserve your original messages internally rather than deleting them (#9665). When you rewind to an earlier checkpoint, the full conversation history is restored automatically.
Roo Code now supports GPT-5.1 Codex Max, OpenAI’s long-horizon coding model, with model defaults for the gpt-5.1 / gpt-5 / gpt-5-mini variants (#9848).
The browser tool can now save screenshots to a specified file path with a new screenshot action, so you can capture visual state during browser automation tasks (#9963).
If you use gpt-5.1-codex-max with the OpenAI provider, you can now select an “Extra High” reasoning effort level for maximum reasoning depth on complex tasks (#9900).
OpenRouter models that support native tools now use native tool calling by default, improving tool calling reliability without manual configuration (#9878).
Hover over error rows to reveal an info icon that opens a modal with full error details and a copy button (#9985).
GPT-5.2 is available in the OpenAI provider and set as the default model (#10024).
You can now configure how Enter behaves in the chat input so it better fits multiline prompts and different input methods (#10002).
The gemini-3-flash-preview model, Google’s latest, is now available in the Roo Code Cloud, Google Gemini, GCP Vertex AI, Requesty, and OpenRouter providers (thanks contributors!) (#10151).
The DeepSeek provider's deepseek-reasoner model now supports "interleaved thinking" and native tool calling. In our internal evals, tool calling succeeded 100% of the time, and the extended-run score improved to 93.4% (thanks zbww_!) (#9969, #10141).
Models that support native tool calling now default to the native protocol instead of XML. The XML protocol remains available in provider settings (#10186).
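For context, "native" tool calling means the provider receives structured tool definitions and returns structured tool calls, rather than XML embedded in the assistant's text that the client must parse back out. A minimal sketch of an OpenAI-style function tool definition (illustrative only, not Roo Code's actual internal schema):

```typescript
// Illustrative OpenAI-style native tool definition (not Roo Code's exact
// schema). With the native protocol, the model sees this structured
// definition and emits structured calls with typed arguments.
const readFileTool = {
  type: "function",
  function: {
    name: "read_file",
    description: "Read the contents of a file at a given path",
    parameters: {
      type: "object",
      properties: {
        path: { type: "string", description: "Relative path to the file" },
      },
      required: ["path"],
      additionalProperties: false,
    },
  },
};
```

With the XML protocol, the same call would be serialized as tags inside the model's text output and parsed by the client, which is why native calling tends to be more reliable on models trained for it.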
When you use Claude Sonnet 4.5 on Vertex AI, you can enable a 1M context window option for supported models (#10209).
Chat error states now make it easier to understand what went wrong and to share the right details when filing a bug report.
Other fixes and improvements in this release:

- `write_to_file` rejected complete markdown files containing inline code comments like `# NEW:` or `// Step 1:` (#9787)
- `mcp--server--tool` ID format (#10054)
- `additionalProperties: false` (#10109)
- `awslabs.aws-documentation-mcp-server` with Amazon Bedrock (#10152)
- `The toolConfig field must be defined` errors (#10155)
- Models such as `google/gemini-3-flash` wouldn't appear immediately after logging into Roo Code Cloud (#10156)
- Tools using `format: "uri"` in their schemas would fail with OpenAI providers (#10198)
- `additionalProperties: false` on object schemas (#10210)
- `~15.2.8` for improved compatibility with upstream fixes (#10140)
- `insert_content` tool (use `apply_diff` or `write_to_file`) (#9751)
- `list_code_definition_names` tool (#10005)
- `apply_patch` and avoiding unsupported file-writing tools (#10082)
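Several of the fixes above concern providers that enforce strict JSON Schema rules, rejecting schemas that omit `additionalProperties: false` on objects or that use niche keywords like `format: "uri"`. A hypothetical sanitizer sketch (not Roo Code's actual implementation) shows the general normalization idea:

```typescript
// Hypothetical schema sanitizer (illustrative, not Roo Code's code):
// strict providers may reject tool schemas that use `format` keywords or
// omit `additionalProperties: false`, so schemas can be normalized first.
type Schema = { [key: string]: unknown };

function normalizeSchema(schema: Schema): Schema {
  const out: Schema = {};
  for (const [key, value] of Object.entries(schema)) {
    if (key === "format") continue; // drop unsupported keywords like format: "uri"
    out[key] =
      value && typeof value === "object" && !Array.isArray(value)
        ? normalizeSchema(value as Schema) // recurse into nested schemas
        : value;
  }
  if (out.type === "object") out.additionalProperties = false;
  return out;
}

const sanitized = normalizeSchema({
  type: "object",
  properties: { url: { type: "string", format: "uri" } },
});
```

Running this on a schema with `format: "uri"` yields an equivalent schema with the `format` keyword stripped and `additionalProperties: false` applied to the top-level object.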