docs/changelog/2024-07-19-gpt-4o-mini.mdx
OpenAI's model lineup has consolidated around the GPT-4 family. LobeHub v1.6 follows that shift, adding GPT-4o mini to the supported models. For LobeHub Cloud users, this upgrade goes further: GPT-4o mini is now the default, replacing GPT-3.5-turbo.
The result is stronger conversations from your first message, without any configuration changes.
GPT-4o mini brings GPT-4-level intelligence at a smaller scale. It's fast enough for real-time interactions and capable enough for most everyday tasks—drafting, analysis, coding help, and creative work.
Use GPT-4o mini when you want fast, responsive replies for everyday tasks such as drafting, analysis, coding help, and creative work.
Switch to the full GPT-4o, or to models from other providers (Claude 3.5 Sonnet, Gemini 1.5 Pro), when you need maximum capability for complex reasoning tasks.
For LobeHub Cloud users, the service upgrade is automatic. New conversations start with GPT-4o mini by default. Existing users don't need to change any settings—the model switcher simply shows the new default first.
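The default-with-override behavior described above can be sketched in a few lines. This is an illustrative snippet, not LobeHub's actual implementation; the function name `resolve_model` and the model identifier string are assumptions for the example.

```python
# Hypothetical sketch: a new conversation falls back to the Cloud default
# ("gpt-4o-mini") unless the user has explicitly picked another model.
from typing import Optional

DEFAULT_MODEL = "gpt-4o-mini"

def resolve_model(user_choice: Optional[str]) -> str:
    """Return the user's chosen model, or the Cloud default if none is set."""
    return user_choice or DEFAULT_MODEL
```

Under this sketch, `resolve_model(None)` yields `"gpt-4o-mini"`, while an explicit choice such as `resolve_model("gpt-4o")` is passed through unchanged, which matches the upgrade's promise that existing settings are left alone.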
Cloud now supports:

- GPT-4o mini (the new default)
- GPT-4o
- Claude 3.5 Sonnet
- Gemini 1.5 Pro