# Using LM Studio in LobeHub

<Image alt={'Using LM Studio in LobeHub'} cover src={'/blog/assets28749075f0c4d62c1642694a4ed9ec08.webp'} />

LM Studio is a platform designed for testing and running large language models (LLMs). It offers an intuitive and user-friendly interface, making it ideal for developers and AI enthusiasts. LM Studio supports deploying and running various open-source LLMs locally—such as Deepseek or Qwen—enabling offline AI chatbot functionality that enhances privacy and flexibility.

This guide will walk you through how to use LM Studio within LobeHub:

<Steps>

### Step 1: Download and Install LM Studio

- Visit the official LM Studio website
- Choose your operating system and download the installer; LM Studio currently supports macOS, Windows, and Linux
- Follow the installation instructions and launch LM Studio

<Image alt={'Install and launch LM Studio'} inStep src={'/blog/assets73ba166f1e6d54e8c860b91f61c23355.webp'} />

### Step 2: Search and Download a Model

- Open the Discover tab in the left sidebar to search for models
- Find a model you’d like to use (e.g., Deepseek R1) and click to download it
- The download may take some time, so please be patient

<Image alt={'Search and download a model'} inStep src={'/blog/assets3e2af0090f02059c687b6add6b73a90b.webp'} />

### Step 3: Deploy and Run the Model

- Use the model selector at the top to choose the downloaded model and load it
- In the pop-up panel, configure the model’s runtime parameters. For detailed settings, refer to the LM Studio documentation

<Image alt={'Configure model runtime parameters'} inStep src={'/blog/assetsbbe90aa719d182d3d2f327e4182732c5.webp'} />

- Click the Load Model button and wait for the model to fully load and start
- Once loaded, you can begin chatting with the model in the built-in interface

### Step 4: Enable Local API Service

- To use the model with other applications, you’ll need to start a local API service. This can be done via the Developer panel or from the app menu. By default, LM Studio serves its OpenAI-compatible API on port 1234 (a minimal request example follows at the end of this step)

<Image alt={'Start local API service'} inStep src={'/blog/assets5fd5fb937b9b05d50ce8659cea3210a4.webp'} />

- After starting the service, make sure to enable the CORS (Cross-Origin Resource Sharing) option in the service settings. This is required for external applications, such as LobeHub running in your browser, to access the model

<Image alt={'Enable CORS'} inStep src={'/blog/assets5f8cc99da9c3c1eaca284411833c99e3.webp'} />
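
Once the service is up, you can sanity-check it before touching LobeHub. LM Studio exposes an OpenAI-compatible REST API, so a plain HTTP request is enough. Below is a minimal TypeScript sketch assuming the default address `http://127.0.0.1:1234` and a hypothetical model identifier (`deepseek-r1-distill-qwen-7b`); substitute whatever identifier your Developer panel shows.

```ts
// Minimal sketch: send one chat message to LM Studio's OpenAI-compatible
// /v1/chat/completions endpoint. Assumes the default port (1234); the model
// identifier below is hypothetical, so use the one LM Studio shows for you.
const response = await fetch("http://127.0.0.1:1234/v1/chat/completions", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    model: "deepseek-r1-distill-qwen-7b", // hypothetical identifier
    messages: [{ role: "user", content: "Say hello in one sentence." }],
    temperature: 0.7,
  }),
});

const data = await response.json();
// Responses follow the OpenAI shape: the reply text sits in
// choices[0].message.content
console.log(data.choices[0].message.content);
```

If this request returns a reply, LobeHub will be able to reach the same endpoint in the next step.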

### Step 5: Connect LM Studio to LobeHub

- Go to the App Settings in LobeHub and open the AI Service Providers section
- Find and select the LM Studio provider from the list

<Image alt={'Enter LM Studio API address'} inStep src={'/blog/assetsc52da5833158f3b3143e40bf2a534ac7.webp'} />

- Enable the LM Studio provider and enter the API service address

<Callout type={'warning'}> If LM Studio is running locally, make sure to enable "Client Request Mode". In this mode, your browser sends requests directly to the local LM Studio server instead of routing them through LobeHub’s backend, which is why the CORS option from Step 4 must be enabled. </Callout>

- Add the model you’re running to the model list below
- Choose a model for your assistant and start chatting

<Image alt={'Select LM Studio model'} inStep src={'/blog/assets4224bf4978bea84e82b3b3aec77656f0.webp'} />

</Steps>
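
If LobeHub reports that it cannot reach the provider, a quick way to narrow the problem down is to ask the server which models it is serving. The sketch below queries LM Studio’s OpenAI-compatible `/v1/models` endpoint, again assuming the default address; the `id` values it prints are the model names to add to LobeHub’s model list.

```ts
// Sketch: list the models LM Studio currently serves. Assumes the default
// API address; adjust the URL if you changed the port in the Developer panel.
const res = await fetch("http://127.0.0.1:1234/v1/models");
if (!res.ok) {
  throw new Error(`LM Studio API not reachable: HTTP ${res.status}`);
}
const { data } = await res.json();
// Each entry's `id` is the identifier to enter in LobeHub's model list.
for (const model of data) {
  console.log(model.id);
}
```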

And that’s it! You’re now ready to use models running in LM Studio directly within LobeHub.