docs/usage/providers/ollama/gemma.mdx
<Image alt={'Using Gemma in LobeHub'} cover rounded src={'/blog/assets17870709/65d2dd2a-fdcf-4f3f-a6af-4ed5164a510d.webp'} />
Gemma is an open-source large language model (LLM) developed by Google. It is designed to be a general-purpose and flexible model for a wide range of natural language processing (NLP) tasks. Now, thanks to LobeHub’s integration with Ollama, you can easily use Google Gemma directly within LobeHub.
This guide will walk you through how to use the Google Gemma model in LobeHub:
<Steps>

### Install Ollama Locally

First, you’ll need to install Ollama. For installation instructions, refer to the Ollama usage guide.
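To confirm the installation before moving on, you can run a quick sanity check from a terminal; this assumes the `ollama` binary is on your PATH:

```bash
# Print the installed Ollama version; any version string means the CLI is ready to use
ollama --version
```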
### Pull the Gemma Model Locally

Once Ollama is installed, you can pull the Google Gemma model to your machine. For example, to pull the default 7B model, run the following command:
```bash
ollama pull gemma
```
<Image alt={'Pulling the Gemma model using Ollama'} height={473} inStep src={'/blog/assets28616219/7049a811-a08b-45d3-8491-970f579c2ebd.webp'} width={791} />
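The download may take a few minutes depending on your connection. Once it completes, you can confirm that the model is available locally:

```bash
# List the models Ollama has stored locally; "gemma" should appear in the output
ollama list
```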
### Select the Gemma Model

In the chat interface, open the model selection panel and choose the Gemma model.
<Image alt={'Selecting the Gemma model in the model panel'} height={629} inStep src={'/blog/assets28616219/69414c79-642e-4323-9641-bfa43a74fcc8.webp'} width={791} />
<Callout type={'info'}> If you don’t see the Ollama provider in the model selection panel, refer to the Ollama Integration Guide to learn how to enable the Ollama provider in LobeHub. </Callout> </Steps>
You’re all set! You can now start chatting with the local Gemma model directly in LobeHub.
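If LobeHub can’t reach the model, a quick way to troubleshoot is to query the local Ollama API directly. The sketch below assumes Ollama is serving on its default port (11434) and that the model was pulled under the name `gemma`:

```bash
# Ask the local Ollama server for a single, non-streamed completion from the Gemma model
curl http://localhost:11434/api/generate -d '{
  "model": "gemma",
  "prompt": "Say hello in one sentence.",
  "stream": false
}'
```

If this returns a JSON response containing generated text, the local model is working, and any remaining issue is likely in the LobeHub provider configuration.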