LobeHub v1.6: GPT-4o Mini Joins the Default Lineup

docs/changelog/2024-07-19-gpt-4o-mini.mdx


With the release of GPT-4o mini, OpenAI's model lineup has moved fully into the GPT-4 era. LobeHub v1.6 follows that shift, adding GPT-4o mini to its supported models. For LobeHub Cloud users, the upgrade goes further: GPT-4o mini replaces GPT-3.5 Turbo as the default model.

The result is stronger conversations from your first message, without any configuration changes.
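For self-hosted deployments, the new model can usually be exposed through the provider's model list. As a hedged sketch, the snippet below uses the `OPENAI_MODEL_LIST` environment-variable convention from LobeChat; verify the variable name and syntax against the docs for your deployed version.

```shell
# Sketch: expose GPT-4o mini in a self-hosted deployment.
# Assumes LobeChat's OPENAI_MODEL_LIST convention (check your version's docs):
#   "-all" hides every built-in model, "+<model>" re-enables a specific one.
OPENAI_MODEL_LIST="-all,+gpt-4o-mini,+gpt-4o"
```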

GPT-4o Mini: Capable and Cost-Effective

GPT-4o mini brings GPT-4-level intelligence at a smaller scale. It's fast enough for real-time interactions and capable enough for most everyday tasks—drafting, analysis, coding help, and creative work.

Use GPT-4o mini when you want:

  • Better reasoning than GPT-3.5 without the latency of full GPT-4o
  • A cost-effective default for high-volume conversations
  • Strong performance on instruction following and tool use

Switch to full GPT-4o or other providers (Claude 3.5 Sonnet, Gemini 1.5 Pro) when you need maximum capability for complex reasoning tasks.

Cloud Service: Upgraded Defaults

For LobeHub Cloud users, the service upgrade is automatic. New conversations start with GPT-4o mini by default. Existing users don't need to change any settings—the model switcher simply shows the new default first.

Cloud now supports:

  • GPT-4o mini (default)
  • GPT-4o
  • Claude 3.5 Sonnet
  • Gemini 1.5 Pro
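The "new default first" behavior boils down to a resolution step when a conversation starts: keep a user's saved model if it is still supported, otherwise fall back to the default. The sketch below is purely illustrative — the function and constant names are hypothetical, not LobeHub's actual code.

```typescript
// Hypothetical sketch of default-model resolution; names are illustrative.
const CLOUD_MODELS = [
  'gpt-4o-mini',
  'gpt-4o',
  'claude-3-5-sonnet',
  'gemini-1.5-pro',
] as const;

type CloudModel = (typeof CLOUD_MODELS)[number];

// v1.6 default: GPT-4o mini replaces GPT-3.5 Turbo.
const DEFAULT_MODEL: CloudModel = 'gpt-4o-mini';

/**
 * Resolve the model for a new conversation: honor the user's saved
 * choice when it is still supported, otherwise fall back to the default.
 */
function resolveModel(saved?: string): CloudModel {
  if (saved && (CLOUD_MODELS as readonly string[]).includes(saved)) {
    return saved as CloudModel;
  }
  return DEFAULT_MODEL;
}
```

With this shape, existing users who explicitly picked GPT-4o or another provider keep their choice, while anyone on a retired default (such as GPT-3.5 Turbo) is silently moved to GPT-4o mini.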

Improvements and Fixes

  • Added GPT-4o mini model configuration and parameter defaults
  • Updated LobeHub Cloud default model selection logic
  • Improved model switcher UI to highlight recommended options
  • Fixed edge cases in streaming responses for newer OpenAI models