docs/provider-config/xai-grok.mdx
xAI is the company behind Grok, a large language model known for its conversational abilities and large context window. Grok models are designed to provide helpful, informative, and contextually relevant responses.
Website: https://x.ai/
Cline supports the following xAI Grok models:
- `grok-4` (Default) - xAI's flagship model with 262K context, prompt caching, and image support
- `grok-4-fast-reasoning` - Fast reasoning variant with 2M context window
- `grok-4-1-fast-reasoning` - Grok 4.1 fast reasoning with 2M context
- `grok-4-1-fast-non-reasoning` - Grok 4.1 fast non-reasoning with 2M context and image support
- `grok-code-fast-1` - Specialized coding model with 256K context
- `grok-3` - Grok-3 model with 131K context window
- `grok-3-fast` - Grok-3 fast model with 131K context window
- `grok-3-mini` - Grok-3 mini model with 131K context window
- `grok-3-mini-fast` - Grok-3 mini fast model with 131K context window
- `grok-3-beta` - Grok-3 beta model with 131K context window
- `grok-3-fast-beta` - Grok-3 fast beta model with 131K context window
- `grok-3-mini-beta` - Grok-3 mini beta model with 131K context window
- `grok-3-mini-fast-beta` - Grok-3 mini fast beta model with 131K context window
- `grok-2-latest` - Grok-2 model, latest version, with 131K context window
- `grok-2` - Grok-2 model with 131K context window
- `grok-2-1212` - Grok-2 model (version 1212) with 131K context window
- `grok-2-vision-latest` - Grok-2 Vision model, latest version, with image support and 32K context window
- `grok-2-vision` - Grok-2 Vision model with image support and 32K context window
- `grok-2-vision-1212` - Grok-2 Vision model (version 1212) with image support and 32K context window
- `grok-vision-beta` - Grok Vision Beta model with image support and 8K context window
- `grok-beta` - Grok Beta model (legacy) with 131K context window

Grok 3 Mini models feature specialized reasoning capabilities, allowing them to "think before responding" - particularly useful for complex problem-solving tasks.
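xAI's API follows the OpenAI chat-completions conventions, so a request to any of the models above can be sketched as a standard payload sent to xAI's endpoint. This is a minimal illustration (the model name comes from the list above; field names follow the usual chat-completions shape):

```python
import json

# Minimal chat-completions payload for xAI's OpenAI-compatible API.
# Sent as the JSON body of a POST to https://api.x.ai/v1/chat/completions
# with an "Authorization: Bearer <XAI_API_KEY>" header.
payload = {
    "model": "grok-4",  # Cline's default xAI model
    "messages": [
        {"role": "user", "content": "Explain prompt caching in one sentence."}
    ],
}

print(json.dumps(payload, indent=2))
```

Swapping the `model` field is all that is needed to target a different Grok variant.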
Reasoning is only supported by:
- `grok-3-mini-beta`
- `grok-3-mini-fast-beta`

The Grok 3 models `grok-3-beta` and `grok-3-fast-beta` do not support reasoning.
When using reasoning-enabled models, you can control how much the model deliberates with the `reasoning_effort` parameter:
- `low`: Minimal thinking time, using fewer tokens for quick responses
- `high`: Maximum thinking time, leveraging more tokens for complex problems

Choose `low` for simple queries that should complete quickly, and `high` for harder problems where response latency is less important.
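Concretely, `reasoning_effort` is just an extra field on the chat-completions request. A sketch of such a payload for one of the reasoning-capable models listed above:

```python
# Chat-completions payload enabling high reasoning effort on a Grok 3 Mini
# reasoning model. Only the reasoning-capable models accept this parameter.
payload = {
    "model": "grok-3-mini-beta",
    "messages": [
        {"role": "user", "content": "Prove that the square root of 2 is irrational."}
    ],
    "reasoning_effort": "high",  # "low" for quick answers, "high" for hard problems
}
```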
When reasoning is enabled, the model's thinking process is returned in the `reasoning_content` field of the response completion object. Use the Grok Vision models (`grok-2-vision-latest`, `grok-2-vision`, etc.) when you need to process or analyze images.