{{< summary-bar feature_name="Docker Model Runner" >}}
Docker Model Runner (DMR) makes it easy to manage, run, and deploy AI models using Docker. Designed for developers, it streamlines pulling, running, and serving large language models (LLMs) and other AI models directly from Docker Hub, any OCI-compliant registry, or Hugging Face.
With seamless integration into Docker Desktop and Docker Engine, you can serve models through OpenAI-compatible and Ollama-compatible APIs, package GGUF files as OCI Artifacts, and interact with models from both the command line and the graphical interface.
Whether you're building generative AI applications, experimenting with machine learning workflows, or integrating AI into your software development lifecycle, Docker Model Runner provides a consistent, secure, and efficient way to work with AI models locally.
Docker Model Runner is supported on the following platforms:
{{< tabs >}}
{{< tab name="Windows" >}}

Windows (amd64):

Windows (arm64):

- OpenCL for Adreno
- Qualcomm Adreno GPU (6xx series and later)

> [!NOTE]
>
> Some llama.cpp features might not be fully supported on the 6xx series.

{{< /tab >}}
{{< tab name="macOS" >}}

{{< /tab >}}
{{< tab name="Linux" >}}

Docker Engine only:

{{< /tab >}}
{{< /tabs >}}
Models are pulled from Docker Hub, an OCI-compliant registry, or Hugging Face the first time you use them and are stored locally. They load into memory only at runtime, when a request is made, and unload when not in use to conserve resources. Because models can be large, the initial pull may take some time. After that, they're cached locally for faster access. You can interact with a model through OpenAI-compatible and Ollama-compatible APIs.
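For example, a first session with a model typically looks like the following; the model name `ai/smollm2` is just an illustration, and any model available to your registry works the same way:

```console
$ docker model pull ai/smollm2
$ docker model run ai/smollm2 "Give me a one-sentence summary of Docker."
```

Once a model is available locally, the OpenAI-compatible endpoint can be called directly. This sketch assumes host-side TCP access is enabled on the default port 12434; the port and path may differ depending on how you've configured the runner:

```console
$ curl http://localhost:12434/engines/v1/chat/completions \
    -H "Content-Type: application/json" \
    -d '{
          "model": "ai/smollm2",
          "messages": [{"role": "user", "content": "Hello"}]
        }'
```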
Docker Model Runner supports three inference engines:
| Engine | Best for | Model format |
|---|---|---|
| llama.cpp | Local development, resource efficiency | GGUF (quantized) |
| vLLM | Production, high throughput | Safetensors |
| Diffusers | Image generation (Stable Diffusion) | Safetensors |
llama.cpp is the default engine and works on all platforms. vLLM requires NVIDIA GPUs and is supported on Linux x86_64 and on Windows through WSL2. Diffusers enables image generation and requires NVIDIA GPUs on Linux (x86_64 or arm64). See Inference engines for a detailed comparison and setup instructions.
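Before pulling a model for a particular engine, it can be useful to confirm the runner is active and see which models are already installed; the exact output format varies by version and platform:

```console
$ docker model status
$ docker model list
```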
Models have a configurable context size (context length) that determines how many tokens they can process in a single request. The default varies by model but is typically 2,048 to 8,192 tokens. You can adjust this per model:
```console
$ docker model configure --context-size 8192 ai/qwen2.5-coder
```
See Configuration options for details on context size and other parameters.
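To review a model's metadata after configuring it, you can inspect it. Whether per-model overrides such as context size appear in the output is version-dependent, so treat this as a quick sanity check rather than an authoritative view of every setting:

```console
$ docker model inspect ai/qwen2.5-coder
```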
> [!TIP]
>
> Using Testcontainers or Docker Compose? Testcontainers for Java and Go, and Docker Compose now support Docker Model Runner. See the Compose sketch below this tip.
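As an illustration, here is a minimal sketch of wiring a model into a Compose application with the top-level `models` element. The service name `app`, its image, and the model choice are placeholders, and the exact attributes supported depend on your Compose version:

```yaml
services:
  app:
    image: my-app:latest # placeholder application image
    models:
      - llm # attach the model defined below to this service
models:
  llm:
    model: ai/smollm2 # model reference served by Docker Model Runner
```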
### `docker model` is not recognised

If you run a Docker Model Runner command and see:
```text
docker: 'model' is not a docker command
```
It means Docker can't find the plugin because it's not in the expected CLI plugins directory.
To fix this, create a symlink so Docker can detect it:
```console
$ ln -s /Applications/Docker.app/Contents/Resources/cli-plugins/docker-model ~/.docker/cli-plugins/docker-model
```
Once linked, rerun the command.
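To confirm Docker now detects the plugin, run any `docker model` subcommand; checking the version is a harmless choice, assuming your build ships one:

```console
$ docker model version
```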
Docker Model Runner respects your privacy settings in Docker Desktop. Data collection is controlled by the **Send usage statistics** setting:

- When you use Docker Model Runner with Docker Engine, HEAD requests to Docker Hub are made to track model names, regardless of any settings.
- No prompt content, responses, or personally identifiable information is ever collected.
Thanks for trying out Docker Model Runner. To report bugs or request features, open an issue on GitHub. You can also give feedback through the **Give feedback** link next to the **Enable Docker Model Runner** setting.