docker/README.md
First things first: no `docker compose`, no persistence, a single command, using the official images.
CUDA (NVIDIA GPU):
```bash
docker run --runtime=nvidia --gpus=all --publish 9090:9090 ghcr.io/invoke-ai/invokeai
```
ROCm (AMD GPU):
```bash
docker run --device /dev/kfd --device /dev/dri --publish 9090:9090 ghcr.io/invoke-ai/invokeai:main-rocm
```
Open http://localhost:9090 in your browser once the container finishes booting, install some models, and generate away!
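The `--publish` flag is ordinary Docker port mapping, so if port 9090 is already taken on your host you can expose the UI on a different one (9091 below is just an example):

```bash
# Map host port 9091 to the container's port 9090, then open http://localhost:9091
docker run --runtime=nvidia --gpus=all --publish 9091:9090 ghcr.io/invoke-ai/invokeai
```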
To persist your generated images and downloaded models outside of the container, add a `--volume`/`-v` flag to the above command, e.g.:

```bash
docker run --volume /some/local/path:/invokeai {...etc...}
```

`/some/local/path` on the host will then contain all your data.
It can usually be reused between different installs of Invoke. Tread with caution and read the release notes!
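Putting the pieces together for the CUDA case, a persistent one-liner might look like this (the host path is only an example):

```bash
docker run --runtime=nvidia --gpus=all \
  --publish 9090:9090 \
  --volume /some/local/path:/invokeai \
  ghcr.io/invoke-ai/invokeai
```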
The included `run.sh` script is a convenience wrapper around `docker compose`. It can be helpful for passing additional build arguments to `docker compose`. Alternatively, the familiar `docker compose` commands work just as well.
```bash
cd docker
cp .env.sample .env
# edit .env to your liking if you need to; it is well commented.
./run.sh
```
It will take a few minutes to build the image the first time. Once the application starts up, open http://localhost:9090 in your browser to invoke!
> [!TIP]
> When using the `run.sh` script, the container will continue running after Ctrl+C. To shut it down, use the `docker compose down` command.
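For reference, a typical `docker compose` lifecycle from the `docker` directory could look like this; these are standard compose commands, not Invoke-specific:

```bash
docker compose up -d    # build if needed, then start in the background
docker compose logs -f  # follow the application logs
docker compose down     # stop and remove the container
```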
Linux:

- Ensure buildkit is enabled in the Docker daemon settings (`/etc/docker/daemon.json`).
- Install the `docker compose` plugin using your package manager, or follow a tutorial.
  - The `docker-compose` (hyphenated) CLI probably won't work. Update to a recent version.

macOS:

> [!TIP]
> You'll be better off installing Invoke directly on your system, because Docker cannot use the GPU on macOS.
If you are still reading:
- Enable `docker compose` V2 support.

This is done via Docker Desktop preferences.
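On either platform, a quick way to confirm that the V2 plugin is in place is to check the version string (it should start with `v2`):

```bash
docker compose version
# Docker Compose version v2.x.x
```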
1. Make a copy of `.env.sample` and name it `.env` (`cp .env.sample .env` on Mac/Linux, or `copy .env.sample .env` on Windows). Make changes as necessary. Set `INVOKEAI_ROOT` to an absolute path to the desired location of the InvokeAI runtime directory. It may be an existing directory from a previous installation (post 4.0.0).
2. Execute `run.sh`. The image will be built automatically if needed.
The runtime directory (holding models and outputs) will be created in the location specified by `INVOKEAI_ROOT`. The default location is `~/invokeai`. Navigate to the Model Manager tab and install some models before generating.
Only the `x86_64` architecture is supported.

The Docker daemon on the system must already be set up to use the GPU. On Linux, this involves installing `nvidia-docker-runtime` and configuring the `nvidia` runtime as default. Steps will be different for AMD. Please see Docker/NVIDIA/AMD documentation for the most up-to-date instructions for using your GPU with Docker.
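For the NVIDIA case on Linux, the daemon configuration (in `/etc/docker/daemon.json`) typically ends up looking something like the sketch below; treat it as illustrative and follow the NVIDIA Container Toolkit documentation for the currently recommended setup:

```json
{
  "runtimes": {
    "nvidia": {
      "path": "nvidia-container-runtime",
      "runtimeArgs": []
    }
  },
  "default-runtime": "nvidia"
}
```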
To use an AMD GPU, set `GPU_DRIVER=rocm` in your `.env` file before running `./run.sh`.
Check the `.env.sample` file. It contains some environment variables for running in Docker. Copy it, name it `.env`, and fill it in with your own values. Next time you run `run.sh`, your custom values will be used.

You can also set these values in `docker-compose.yml` directly, but `.env` will help avoid conflicts when code is updated.

Values are optional, but setting `INVOKEAI_ROOT` is highly recommended. The default is `~/invokeai`. Example:
```bash
INVOKEAI_ROOT=/Volumes/WorkDrive/invokeai
HUGGINGFACE_TOKEN=the_actual_token
CONTAINER_UID=1000
GPU_DRIVER=cuda
```
Any environment variables supported by InvokeAI can be set here. See the Configuration docs for further detail.
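For example, you might add InvokeAI's own settings to the same `.env` file. The names below are assumptions based on the current configuration schema; verify them against the Configuration docs before relying on them:

```bash
# Hypothetical additions to .env; verify names against the Configuration docs
INVOKEAI_HOST=0.0.0.0    # listen on all interfaces
INVOKEAI_LOG_LEVEL=debug # increase log verbosity
```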