docs/provider-config/google-gemini.mdx
Google Gemini is Google's family of multimodal AI models, offering some of the largest context windows available and strong performance across coding, reasoning, and document analysis tasks.
Website: https://ai.google.dev/
Cline supports the following Google Gemini models:
- gemini-3.1-pro-preview (Default) - Latest pro model with a 1M context window, thinking support, and tiered pricing ($2.00-$4.00/M input)
- gemini-3-flash-preview - Fast model with a 1M context window and thinking level support ($0.30-$0.50/M input)
- gemini-2.5-pro - High-performance model with a 1M context window and thinking budget ($1.25-$2.50/M input)
- gemini-2.5-flash - Fast and affordable, with a 1M context window and thinking support ($0.30/M input)
- gemini-2.5-flash-lite-preview-06-17 - Ultra-affordable lite variant ($0.10/M input)
- gemini-2.0-flash-001 - Fast model with a 1M context window and prompt caching ($0.10/M input)
- gemini-2.0-flash-lite-preview-02-05 - Lite variant (free during preview)
- gemini-2.0-pro-exp-02-05 - Pro experimental model with a 2M context window (free during preview)
- gemini-2.0-flash-thinking-exp-01-21 - Thinking experimental model with a 1M context window (free)
- gemini-2.0-flash-thinking-exp-1219 - Earlier thinking experimental model (free)
- gemini-2.0-flash-exp - Flash experimental model with a 1M context window (free)
- gemini-1.5-flash-002 - Fast model with tiered pricing and prompt caching
- gemini-1.5-flash-exp-0827 - Flash experimental model (free)
- gemini-1.5-flash-8b-exp-0827 - Compact 8B flash model (free)
- gemini-1.5-pro-002 - Pro model with a 2M context window
- gemini-1.5-pro-exp-0827 - Pro experimental model (free)
- gemini-exp-1206 - Experimental model with a 2M context window (free)

Gemini 3 and 2.5 models support thinking/reasoning capabilities:
Gemini 3 models expose thinking levels (low, high) that control reasoning depth, while Gemini 2.5 models use a thinking budget. Enable extended thinking in Cline settings to leverage these capabilities for complex coding and reasoning tasks.
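To make the per-token prices above concrete, here is a minimal sketch that estimates the input cost of a single request. The price table and the `estimate_input_cost` helper are illustrative simplifications, not part of Cline or the Gemini API: they use only the base-tier input prices listed above and ignore tiered thresholds, prompt-caching discounts, and output-token charges.

```python
# Illustrative base-tier input prices in USD per million tokens,
# taken from the model list above (not an official price table).
INPUT_PRICE_PER_M = {
    "gemini-2.5-pro": 1.25,      # tiered: rises toward $2.50/M for larger prompts
    "gemini-2.5-flash": 0.30,
    "gemini-2.0-flash-001": 0.10,
}

def estimate_input_cost(model: str, input_tokens: int) -> float:
    """Approximate the input cost in USD for one request (base tier only)."""
    price_per_m = INPUT_PRICE_PER_M[model]
    return input_tokens / 1_000_000 * price_per_m

# Example: a 200k-token prompt to gemini-2.5-pro
print(round(estimate_input_cost("gemini-2.5-pro", 200_000), 4))  # → 0.25
```

With the large context windows these models offer, input tokens usually dominate the bill for coding-agent workloads, which is why the tiered input rates and prompt caching noted in the list matter in practice.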