---
title: LM Studio
description: Run AI models locally using LM Studio with Cline.
---
LM Studio serves loaded models through an OpenAI-compatible local server, which listens at `http://localhost:1234` by default. For the best experience with Cline, use Qwen3 Coder 30B A3B Instruct; this model delivers strong coding performance and reliable tool use.
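As a sketch of how a client talks to that local server (assuming the default port, and a hypothetical model identifier — use the name LM Studio shows for your loaded model), a request to the OpenAI-compatible `/v1/chat/completions` route looks like:

```python
import json
import urllib.request

# Chat request body for LM Studio's OpenAI-compatible server.
# The model identifier below is an assumption; copy the exact name
# shown in LM Studio's Developer tab for your loaded model.
payload = {
    "model": "qwen3-coder-30b-a3b-instruct",
    "messages": [{"role": "user", "content": "Say hello."}],
    "temperature": 0.7,
}

req = urllib.request.Request(
    "http://localhost:1234/v1/chat/completions",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)

# Uncomment once the LM Studio server is running:
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```

Cline issues equivalent requests for you once the provider is configured; the snippet is only to show where the base URL and model name fit.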
After loading your model in the Developer tab, configure these settings:
Choose a quantization level based on your available RAM: lower-bit quantizations need less memory but trade away some output quality.
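A rough way to size this (a back-of-envelope sketch, not LM Studio's own calculation): weight memory is approximately the parameter count times bits per weight, divided by eight — and the KV cache and runtime overhead come on top of that.

```python
def approx_weight_gb(params_billion: float, bits_per_weight: float) -> float:
    """Approximate weight memory in GB for a model at a given quantization.

    Weights only; KV cache and runtime overhead are extra.
    """
    return params_billion * bits_per_weight / 8

# Illustrative numbers for a 30B-parameter model such as Qwen3 Coder 30B:
# 4-bit -> ~15 GB of weights, 8-bit -> ~30 GB of weights.
```

If the estimate plus a few GB of headroom exceeds your RAM, pick a lower-bit quantization.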
For optimal performance with local models, enable compact prompts in Cline's settings; this reduces the prompt size by 90% while maintaining core functionality.
Navigate to Cline Settings → Features → Use Compact Prompt and toggle it on.