# LM Studio
LM Studio lets you run models locally on your own computer, provided your hardware meets the model's requirements.
Follow the LM Studio instructions to download and run your desired model, e.g. `deepseek-r1-qwen-7b`:

```bash
lms get deepseek-r1-qwen-7b
```
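Once the model is downloaded, LM Studio's local server must be running before AstrBot can reach it. The following is a minimal sketch of starting the server and checking that it responds, assuming the default port 1234 (`lms server start` and the OpenAI-compatible `/v1/models` endpoint are provided by LM Studio):

```bash
# Start LM Studio's local server (listens on port 1234 by default)
lms server start

# List the models the server exposes through its OpenAI-compatible API
curl http://localhost:1234/v1/models
```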
In AstrBot:

1. Go to Configuration → Service Providers → + → OpenAI.
2. Set API Base URL to `http://localhost:1234/v1`.
3. Set API Key to `lm-studio`.
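Before saving, you can verify that the endpoint answers a chat request. This is an illustrative test, not part of AstrBot: LM Studio's local server does not validate the API key, so `lm-studio` is effectively a placeholder, and the model name should match the identifier of the model you downloaded:

```bash
curl http://localhost:1234/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer lm-studio" \
  -d '{
        "model": "deepseek-r1-qwen-7b",
        "messages": [{"role": "user", "content": "Hello!"}]
      }'
```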
For users deploying AstrBot via Docker Desktop on Mac or Windows, set API Base URL to `http://host.docker.internal:1234/v1`. For users deploying AstrBot via Docker on Linux, set API Base URL to `http://172.17.0.1:1234/v1`, or replace `172.17.0.1` with your server's public IP (make sure port 1234 is open on the host).
If LM Studio itself is deployed in Docker, ensure port 1234 is mapped to the host.
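As an alternative on Linux, Docker 20.10+ can map `host.docker.internal` to the host gateway, so the same URL works as on Mac/Windows. A hypothetical invocation (the image name and other flags here are placeholders, not AstrBot's documented run command):

```bash
# Make host.docker.internal resolve to the Docker host on Linux,
# so the container can reach LM Studio at host.docker.internal:1234
docker run --add-host=host.docker.internal:host-gateway <astrbot-image>
```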
Set the model name to the model you downloaded earlier (e.g. `deepseek-r1-qwen-7b`), then save the configuration.
Run `/provider` to view the models configured in AstrBot.