content/manuals/ai/model-runner/openwebui-integration.md
Open WebUI is an open-source, self-hosted web interface that provides a ChatGPT-like experience for local AI models. You can connect it to Docker Model Runner to get a polished chat interface for your models.
Before you begin, make sure Docker Model Runner is enabled with host-side TCP access (the examples below use the default port 12434), and pull at least one model (for example, `docker model pull ai/llama3.2`).

The easiest way to run Open WebUI with Docker Model Runner is with Docker Compose. Create a `compose.yaml` file:
```yaml
services:
  open-webui:
    image: ghcr.io/open-webui/open-webui:main
    ports:
      - "3000:8080"
    environment:
      - OLLAMA_BASE_URL=http://host.docker.internal:12434
      - WEBUI_AUTH=false
    extra_hosts:
      - "host.docker.internal:host-gateway"
    volumes:
      - open-webui:/app/backend/data

volumes:
  open-webui:
```
Start the services:
```console
$ docker compose up -d
```
Open your browser to http://localhost:3000.
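If you want to confirm from the terminal that the interface is up before opening the browser, a plain HTTP request should return a success status code (a quick sanity check, not required):

```console
$ curl -s -o /dev/null -w "%{http_code}\n" http://localhost:3000
```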
The following environment variables control how Open WebUI connects to Docker Model Runner:

| Variable | Description | Default |
|----------|-------------|---------|
| `OLLAMA_BASE_URL` | URL of Docker Model Runner | Required |
| `WEBUI_AUTH` | Enable authentication | `true` |
| `OPENAI_API_BASE_URL` | Use the OpenAI-compatible API instead | - |
| `OPENAI_API_KEY` | API key (any value works for Docker Model Runner) | - |
If you prefer to use the OpenAI-compatible API instead of the Ollama API:
```yaml
services:
  open-webui:
    image: ghcr.io/open-webui/open-webui:main
    ports:
      - "3000:8080"
    environment:
      - OPENAI_API_BASE_URL=http://host.docker.internal:12434/engines/v1
      - OPENAI_API_KEY=not-needed
      - WEBUI_AUTH=false
    extra_hosts:
      - "host.docker.internal:host-gateway"
    volumes:
      - open-webui:/app/backend/data

volumes:
  open-webui:
```
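To confirm that the OpenAI-compatible endpoint is reachable from the host, you can list the available models. This assumes host-side TCP access is enabled on the default port, as in the rest of this page:

```console
$ curl http://localhost:12434/engines/v1/models
```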
On Docker Desktop, `host.docker.internal` automatically resolves to the host machine, so the previous example works without modification.
On Docker Engine, you may need to configure the network differently:
```yaml
services:
  open-webui:
    image: ghcr.io/open-webui/open-webui:main
    # With host networking, Open WebUI is reachable on port 8080
    network_mode: host
    environment:
      - OLLAMA_BASE_URL=http://localhost:12434
      - WEBUI_AUTH=false
    volumes:
      - open-webui:/app/backend/data

volumes:
  open-webui:
```
Or use the host gateway:
```yaml
services:
  open-webui:
    image: ghcr.io/open-webui/open-webui:main
    ports:
      - "3000:8080"
    environment:
      - OLLAMA_BASE_URL=http://172.17.0.1:12434
      - WEBUI_AUTH=false
    volumes:
      - open-webui:/app/backend/data

volumes:
  open-webui:
```
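The address `172.17.0.1` is the default gateway of Docker's bridge network. If your installation uses a different address, you can look it up; this assumes the default `bridge` network still has its standard IPAM configuration:

```console
$ docker network inspect bridge --format '{{ (index .IPAM.Config 0).Gateway }}'
```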
Models pulled with Docker Model Runner appear in Open WebUI's model selector under their full names, with the `ai/` prefix. Open WebUI can also pull models directly: in its model management settings, enter a name such as `ai/llama3.2`. From there, Open WebUI provides its usual chat interface on top of whichever models are available.
This example sets up Open WebUI with Docker Model Runner and pre-pulls several models:
```yaml
services:
  open-webui:
    image: ghcr.io/open-webui/open-webui:main
    ports:
      - "3000:8080"
    environment:
      - OLLAMA_BASE_URL=http://host.docker.internal:12434
      - WEBUI_AUTH=false
      - DEFAULT_MODELS=ai/llama3.2
    extra_hosts:
      - "host.docker.internal:host-gateway"
    volumes:
      - open-webui:/app/backend/data
    depends_on:
      model-setup:
        condition: service_completed_successfully

  model-setup:
    image: docker:cli
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    command: >
      sh -c "
        docker model pull ai/llama3.2 &&
        docker model pull ai/qwen2.5-coder &&
        docker model pull ai/smollm2
      "

volumes:
  open-webui:
```
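After starting this stack, you can check that the setup service completed and that the models are available locally, using the same commands shown elsewhere on this page:

```console
$ docker compose up -d
$ docker model list
```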
For multi-user setups, or to add a layer of security, enable authentication:
```yaml
services:
  open-webui:
    image: ghcr.io/open-webui/open-webui:main
    ports:
      - "3000:8080"
    environment:
      - OLLAMA_BASE_URL=http://host.docker.internal:12434
      - WEBUI_AUTH=true
    extra_hosts:
      - "host.docker.internal:host-gateway"
    volumes:
      - open-webui:/app/backend/data

volumes:
  open-webui:
```
On first visit, you'll create an admin account.
Verify Docker Model Runner is accessible:
```console
$ curl http://localhost:12434/api/tags
```
Check that models are pulled:
```console
$ docker model list
```
Verify that `OLLAMA_BASE_URL` is correct and reachable from the container, and ensure that host-side TCP access is enabled for Docker Model Runner.
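On Docker Desktop, host-side TCP access can be enabled from the Model Runner settings or with a command like the following, shown here with the default port used throughout this page:

```console
$ docker desktop enable model-runner --tcp 12434
```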
On Docker Desktop, verify that `host.docker.internal` resolves:

```console
$ docker run --rm alpine ping -c 1 host.docker.internal
```

On Docker Engine, try `network_mode: host` or the explicit host gateway IP.
The first request to a model loads it into memory, which takes time; subsequent requests are much faster. If responses are consistently slow, consider using a smaller model or checking that the model fits in your available memory.
If you run Open WebUI on a different host than Docker Model Runner, make sure the Model Runner TCP port (12434 by default) is reachable from that host, and point `OLLAMA_BASE_URL` (or `OPENAI_API_BASE_URL`) at the Docker host's address instead of `host.docker.internal`.
Open WebUI supports setting system prompts per model. Configure these in the UI under Settings > Models.
You can also adjust model parameters, such as temperature and the maximum number of tokens, in the chat interface's advanced settings. These settings are passed through to Docker Model Runner.
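For reference, the kind of request that Open WebUI sends on your behalf can also be made directly against the OpenAI-compatible endpoint used earlier on this page. This sketch assumes host-side TCP access on port 12434 and the `ai/llama3.2` model from the examples above:

```console
$ curl http://localhost:12434/engines/v1/chat/completions \
    -H "Content-Type: application/json" \
    -d '{
      "model": "ai/llama3.2",
      "messages": [{"role": "user", "content": "Hello!"}],
      "temperature": 0.7
    }'
```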
To run Open WebUI on a different port:
```yaml
services:
  open-webui:
    image: ghcr.io/open-webui/open-webui:main
    ports:
      - "8080:8080" # Change the first (host) port number
    # ... rest of config
```