docs/usage/providers/ollama.mdx
<Image alt={'Using Ollama in LobeHub'} borderless cover src={'/blog/assets17870709/f579b39b-e771-402c-a1d1-620e57a10c75.webp'} />
Ollama is a powerful framework for running large language models (LLMs) locally. It supports a variety of models, including Llama 2, Mistral, and more. LobeHub now integrates seamlessly with Ollama, allowing you to leverage these models directly within your chat interface.
This guide will walk you through how to use Ollama in LobeHub:
<Video alt={'Full demo of using Ollama in LobeHub'} height={580} src="/blog/assets28616219/c32b56db-c6a1-4876-9bc3-acbd37ec0c0c.mp4" />
On macOS, download Ollama, then unzip and install it.
By default, Ollama only allows local access. To enable cross-origin access and port listening, set the OLLAMA_ORIGINS environment variable using launchctl:
```bash
launchctl setenv OLLAMA_ORIGINS "*"
```
After setting the variable, restart the Ollama application.
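If you want to double-check before relaunching, `launchctl` can read the value back; as a quick sketch, an empty response means the variable isn't set for your session:

```bash
# Read back the variable set above; prints "*" if it was applied
launchctl getenv OLLAMA_ORIGINS
```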
You can now start chatting with local LLMs in LobeHub.
<Image alt="Chatting with llama3 in LobeHub" height="573" src="/blog/assets28616219/7f9a9a9f-fd91-4f59-aac9-3f26c6d49a1e.webp" /> </Steps>Download Ollama for Windows and install it.
By default, Ollama only allows local access. To enable cross-origin access and port listening, set the OLLAMA_ORIGINS environment variable.
On Windows, Ollama inherits your user and system environment variables:
In the environment variable settings, create a new OLLAMA_ORIGINS variable for your user account and set its value to *. Click OK/Apply, then restart your system.

You can now start chatting with local LLMs in LobeHub.
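Tip: if you prefer the command line, the same per-user variable can also be set with the built-in `setx` command (a sketch; you still need to restart Ollama afterwards for it to take effect):

```powershell
# Persist OLLAMA_ORIGINS for the current user; applies to newly started processes
setx OLLAMA_ORIGINS "*"
```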
On Linux, run the following command to install Ollama:
```bash
curl -fsSL https://ollama.com/install.sh | sh
```
Alternatively, refer to the manual installation guide for Linux.
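Once installation finishes, you can optionally confirm the CLI is on your PATH (assuming the install script completed successfully):

```bash
# Print the installed Ollama version
ollama --version
```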
By default, Ollama only allows local access. To enable cross-origin access and port listening, set the OLLAMA_ORIGINS environment variable. If Ollama is running as a systemd service, use systemctl to configure it:
```bash
sudo systemctl edit ollama.service
```

Add the following under the `[Service]` section:

```ini
[Service]
Environment="OLLAMA_HOST=0.0.0.0"
Environment="OLLAMA_ORIGINS=*"
```

Save and exit, then reload systemd and restart Ollama:

```bash
sudo systemctl daemon-reload
sudo systemctl restart ollama
```
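To sanity-check the new configuration, you can send a request with an Origin header and look for CORS headers in the response. This is only a rough check against Ollama's default port; the origin below is a placeholder:

```bash
# Request the local model list while pretending to come from another origin
curl -i -H "Origin: http://example.com" http://localhost:11434/api/tags
# With OLLAMA_ORIGINS="*", the response headers should include Access-Control-Allow-Origin
```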
You can now start chatting with local LLMs in LobeHub.
If you prefer using Docker, Ollama provides an official image. Pull it with:
```bash
docker pull ollama/ollama
```
By default, Ollama only allows local access. To enable cross-origin access and port listening, set the OLLAMA_ORIGINS environment variable in your docker run command:
```bash
docker run -d --gpus=all -v ollama:/root/.ollama -e OLLAMA_ORIGINS="*" -p 11434:11434 --name ollama ollama/ollama
```
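Note that `--gpus=all` assumes your host is set up for GPU containers; you can drop it to run on CPU only. Once the container is up, a quick sketch for pulling and listing models inside it (`llama3` is just an example model name):

```bash
# Confirm the container is running
docker ps --filter name=ollama

# Pull an example model and list what's installed inside the container
docker exec -it ollama ollama pull llama3
docker exec -it ollama ollama list
```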
You can now start chatting with local LLMs in LobeHub.
Ollama supports a wide range of models. You can browse the available models in the Ollama Library and choose the ones that best suit your needs.
LobeHub comes pre-configured with popular LLMs such as Llama 3, Gemma, and Mistral. When you select a model for the first time, LobeHub will prompt you to download it.
<Image alt="LobeHub prompts to install Ollama model" height="460" src="/blog/assets28616219/4e81decc-776c-43b8-9a54-dfb43e9f601a.webp" />Once the download is complete, you can start chatting.
Alternatively, you can install models directly via the terminal. For example, to install llama3:
```bash
ollama pull llama3
```
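You can also manage and test models entirely from the terminal, for example:

```bash
# List the models installed locally
ollama list

# Start an interactive chat session with a model
ollama run llama3
```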
You can configure Ollama settings in LobeHub under Settings -> AI Providers. Here, you can set the proxy, model name, and more.
<Image alt={'Ollama provider settings'} height={274} src={'/blog/assets28616219/54b3696b-5b13-4761-8c1b-1e664867b2dd.webp'} />
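If you're unsure what endpoint to point LobeHub at, Ollama listens on http://127.0.0.1:11434 by default; a quick reachability check (the exact response text may vary by version):

```bash
# The root endpoint responds with a short status message when the server is up
curl http://127.0.0.1:11434
```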
<Callout type={'info'}> To learn how to deploy LobeHub with Ollama integration, visit Integrating with Ollama. </Callout>