docs/usage/providers/lmstudio.mdx
<Image alt={'Using LM Studio in LobeHub'} cover src={'/blog/assets28749075f0c4d62c1642694a4ed9ec08.webp'} />
LM Studio is a platform designed for testing and running large language models (LLMs). It offers an intuitive, user-friendly interface, making it ideal for developers and AI enthusiasts. LM Studio supports deploying and running various open-source LLMs locally, such as DeepSeek or Qwen, enabling offline AI chatbot functionality that enhances privacy and flexibility.
This guide will walk you through how to use LM Studio within LobeHub:
<Steps>

### Step 1: Download and Install LM Studio

Visit the official LM Studio website, download the installer for your operating system, then install and launch the application.

<Image alt={'Install and launch LM Studio'} inStep src={'/blog/assets73ba166f1e6d54e8c860b91f61c23355.webp'} />
### Step 2: Search for and Download a Model

Click the Discover tab on the left sidebar to search for models, then download the one you want to run.

<Image alt={'Search and download a model'} inStep src={'/blog/assets3e2af0090f02059c687b6add6b73a90b.webp'} />
### Step 3: Configure the Model

After the download completes, configure the model's runtime parameters as needed.

<Image alt={'Configure model runtime parameters'} inStep src={'/blog/assetsbbe90aa719d182d3d2f327e4182732c5.webp'} />
### Step 4: Start the Local API Service

Click the Load Model button and wait for the model to fully load and start. Then launch the local API service from the Developer panel or from the app menu. By default, LM Studio runs the service on port 1234.

<Image alt={'Start local API service'} inStep src={'/blog/assets5fd5fb937b9b05d50ce8659cea3210a4.webp'} />
### Step 5: Enable CORS

Turn on the CORS (Cross-Origin Resource Sharing) option in the service settings. This is required for external applications, such as LobeHub, to access the model.

<Image alt={'Enable CORS'} inStep src={'/blog/assets5f8cc99da9c3c1eaca284411833c99e3.webp'} />
### Step 6: Configure LM Studio in LobeHub

Go to App Settings in LobeHub and open the AI Service Providers section, then select the LM Studio provider from the list and enter the API address of your local service.

<Image alt={'Enter LM Studio API address'} inStep src={'/blog/assetsc52da5833158f3b3143e40bf2a534ac7.webp'} />
<Callout type={'warning'}> If LM Studio is running locally, make sure to enable the "Client Request Mode". </Callout>
### Step 7: Select the Model

Choose the model you loaded in LM Studio from the model list and start chatting.

<Image alt={'Select LM Studio model'} inStep src={'/blog/assets4224bf4978bea84e82b3b3aec77656f0.webp'} />

</Steps>
And that’s it! You’re now ready to use models running in LM Studio directly within LobeHub.
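If you want to verify the local service outside of LobeHub, you can also call it directly: LM Studio's local server exposes OpenAI-compatible endpoints under `/v1`. The sketch below builds and sends a chat completion request, assuming the default address (`http://localhost:1234`) and a hypothetical model identifier (`qwen2.5-7b-instruct`); substitute the identifier shown in your own Developer panel.

```python
import json
import urllib.request

# Assumption: LM Studio's local service is running on the default port 1234.
BASE_URL = "http://localhost:1234/v1"


def build_chat_request(model: str, prompt: str) -> dict:
    """Build an OpenAI-style chat completion payload."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.7,
    }


if __name__ == "__main__":
    # "qwen2.5-7b-instruct" is a placeholder; use the id of the model
    # you actually loaded in LM Studio.
    payload = build_chat_request("qwen2.5-7b-instruct", "Hello!")
    req = urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        reply = json.load(resp)
        print(reply["choices"][0]["message"]["content"])
```

If the request succeeds here but LobeHub still cannot reach the model, double-check the CORS setting from Step 5.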