
# Using the Local Qwen Model

<Image alt={'Using Qwen in LobeHub'} cover src={'/blog/assets17870709/b4a01219-e7b1-48a0-888c-f0271b18e3a6.webp'} />

Qwen is an open-source large language model (LLM) developed by Alibaba Cloud. It is officially described as a continuously evolving AI model whose extensive training data gives it particularly accurate Chinese language understanding.

<Video src="/blog/assets28616219/31e5f625-8dc4-4a5f-a5fd-d28d0457782d.mp4" />

Thanks to its integration with Ollama, LobeHub now lets you use the Qwen model locally with ease.

This guide will walk you through how to use the locally deployed Qwen model in LobeHub:

<Steps>

### Install Ollama Locally

First, you’ll need to install Ollama. For installation instructions, refer to the Ollama usage guide.
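As a quick reference, a typical Linux installation uses Ollama's official install script (macOS and Windows users can download the desktop installer from the Ollama website instead):

```bash
# Install Ollama on Linux via the official install script
curl -fsSL https://ollama.com/install.sh | sh

# Verify the installation
ollama --version
```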

### Pull the Qwen Model Using Ollama

Once Ollama is installed, you can pull the Qwen model locally. For example, to pull the 14b version of the model, run:

```bash
ollama pull qwen:14b
```

<Image alt={'Pulling the Qwen model using Ollama'} height={473} inStep src={'/blog/assets1845053/fe34fdfe-c2e4-4d6a-84d7-4ebc61b2516a.webp'} />
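Once the pull completes, you can confirm the download and give the model a quick smoke test straight from the terminal:

```bash
# List locally available models; qwen:14b should appear
ollama list

# Optional: chat with the model directly in the terminal (type /bye to exit)
ollama run qwen:14b
```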

### Select the Qwen Model

In the chat interface, open the model selection panel and choose the Qwen model.

<Image alt={'Selecting the Qwen model in the model panel'} height={430} inStep src={'/blog/assets28616219/e0608cca-f62f-414a-bc55-28a61ba21f14.webp'} />

<Callout type={'info'}>
  If you don’t see the Ollama provider in the model selection panel, refer to the Ollama Integration Guide to learn how to enable the Ollama provider in LobeHub.
</Callout>

</Steps>
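If the model doesn't respond in LobeHub, a quick way to check that Ollama is actually serving it is to call the Ollama REST API directly. This minimal sketch assumes Ollama is listening on its default port, 11434:

```bash
# Request a short, non-streamed completion from the local qwen:14b model
curl http://localhost:11434/api/generate -d '{
  "model": "qwen:14b",
  "prompt": "Hello, who are you?",
  "stream": false
}'
```

A JSON reply containing a `response` field indicates the model is up and reachable.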

You’re now ready to start chatting with the local Qwen model in LobeHub.