docs/changelog/2024-02-14-ollama.mdx
Cloud models are powerful, but sometimes you need data to stay local. Maybe it's a sensitive project. Maybe you want to experiment without API costs. Maybe you just like the idea of owning the entire stack. LobeHub v0.127.0 now supports Ollama, giving you the same chat experience whether your model lives in the cloud or on your machine.
No separate interface to learn. No workflow fragmentation. Just point LobeHub at your local Ollama instance and start chatting.
Getting started is straightforward. If you already have Ollama running, connect LobeHub with a single Docker command:
```bash
docker run -d -p 3210:3210 \
  -e OLLAMA_PROXY_URL=http://host.docker.internal:11434/v1 \
  lobehub/lobe-chat
```
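If the connection doesn't work, the usual culprit is that Ollama isn't actually serving yet. A quick sanity check (a sketch, assuming Ollama's default port 11434; `/api/tags` is the endpoint that lists your pulled models):

```shell
# Verify a local Ollama instance is reachable before pointing LobeHub at it.
# Ollama's API listens on port 11434 by default; /api/tags lists pulled models.
if curl -sf --max-time 2 http://localhost:11434/api/tags > /dev/null; then
  echo "ollama: reachable"
else
  echo "ollama: not reachable (start it with 'ollama serve')"
fi
```

Note that the Docker command above uses `host.docker.internal` rather than `localhost`, because inside the LobeHub container `localhost` refers to the container itself, not the host machine where Ollama is running.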
That's it. LobeHub detects your local models and makes them available in the same model switcher you use for GPT-4, Claude, and others. Mix cloud and local models in the same workspace depending on what each conversation needs.
Huge thanks to the community contributor who made Ollama integration possible, and to the Ollama team for building accessible local AI infrastructure.