docs/en/yolov5/environments/docker_image_quickstart_tutorial.md
Welcome to the Ultralytics YOLOv5 Docker Quickstart Guide! This tutorial provides step-by-step instructions for setting up and running YOLOv5 within a Docker container. Using Docker enables you to run YOLOv5 in an isolated, consistent environment, simplifying deployment and dependency management across different systems. This approach leverages containerization to package the application and its dependencies together.
For alternative setup methods, consider our [Colab notebook](https://colab.research.google.com/github/ultralytics/yolov5/blob/master/tutorial.ipynb), [Kaggle notebook](https://www.kaggle.com/models/ultralytics/yolov5), GCP Deep Learning VM, or Amazon AWS guides. For a general overview of Docker usage with Ultralytics models, see the Ultralytics Docker Quickstart Guide.
Before you begin, ensure you have the following installed:

- **Docker**: required to pull and run container images.
- **NVIDIA GPU drivers** (for GPU support): the NVIDIA Container Toolkit setup below builds on a working driver installation.
First, verify that your NVIDIA drivers are installed correctly by running:
```bash
nvidia-smi
```
This command should display information about your GPU(s) and the installed driver version.
Next, install the NVIDIA Container Toolkit. The commands below are typical for Debian-based systems like Ubuntu and RHEL-based systems like Fedora/CentOS; refer to the official NVIDIA Container Toolkit installation guide for instructions specific to your distribution:
=== "Ubuntu/Debian"
```bash
curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg \
&& curl -s -L https://nvidia.github.io/libnvidia-container/stable/deb/nvidia-container-toolkit.list \
| sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' \
| sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list
```
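The `sed` filter above rewrites each repository line so APT verifies packages against the downloaded keyring. As a quick sanity check, you can run the same substitution on a sample line (illustrative only; the real repo list may contain more fields):

```bash
# Sample repo line in the format published by NVIDIA (illustrative)
line='deb https://nvidia.github.io/libnvidia-container/stable/deb/$(ARCH) /'

# Apply the same sed substitution used in the setup command above
echo "$line" | sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g'
```

The output line gains a `signed-by=` attribute pointing APT at the keyring installed in the previous step.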
Update the package lists:
```bash
sudo apt-get update
```
Install the latest version of the `nvidia-container-toolkit` packages:
```bash
sudo apt-get install -y nvidia-container-toolkit \
nvidia-container-toolkit-base libnvidia-container-tools \
libnvidia-container1
```
??? info "Optional: Install specific version of nvidia-container-toolkit"
Optionally, you can install a specific version of the nvidia-container-toolkit by setting the `NVIDIA_CONTAINER_TOOLKIT_VERSION` environment variable:
```bash
export NVIDIA_CONTAINER_TOOLKIT_VERSION=1.17.8-1
sudo apt-get install -y \
nvidia-container-toolkit=${NVIDIA_CONTAINER_TOOLKIT_VERSION} \
nvidia-container-toolkit-base=${NVIDIA_CONTAINER_TOOLKIT_VERSION} \
libnvidia-container-tools=${NVIDIA_CONTAINER_TOOLKIT_VERSION} \
libnvidia-container1=${NVIDIA_CONTAINER_TOOLKIT_VERSION}
```
Configure Docker to use the NVIDIA runtime, then restart the Docker daemon:

```bash
sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker
```
=== "RHEL/CentOS/Fedora/Amazon Linux"
```bash
curl -s -L https://nvidia.github.io/libnvidia-container/stable/rpm/nvidia-container-toolkit.repo \
| sudo tee /etc/yum.repos.d/nvidia-container-toolkit.repo
```
Refresh the repository metadata and check for updates:
```bash
sudo dnf clean expire-cache
sudo dnf check-update
```
```bash
sudo dnf install \
nvidia-container-toolkit \
nvidia-container-toolkit-base \
libnvidia-container-tools \
libnvidia-container1
```
??? info "Optional: Install specific version of nvidia-container-toolkit"
Optionally, you can install a specific version of the nvidia-container-toolkit by setting the `NVIDIA_CONTAINER_TOOLKIT_VERSION` environment variable:
```bash
export NVIDIA_CONTAINER_TOOLKIT_VERSION=1.17.8-1
sudo dnf install -y \
nvidia-container-toolkit-${NVIDIA_CONTAINER_TOOLKIT_VERSION} \
nvidia-container-toolkit-base-${NVIDIA_CONTAINER_TOOLKIT_VERSION} \
libnvidia-container-tools-${NVIDIA_CONTAINER_TOOLKIT_VERSION} \
libnvidia-container1-${NVIDIA_CONTAINER_TOOLKIT_VERSION}
```
Configure Docker to use the NVIDIA runtime, then restart the Docker daemon:

```bash
sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker
```
Verify the installation by confirming that `nvidia` appears in Docker's list of runtimes:

```bash
docker info | grep -i runtime
```

You should see `nvidia` listed as one of the available runtimes.
Ultralytics provides official YOLOv5 images on Docker Hub. The `latest` tag tracks the most recent repository commit, so you always get the newest version. Pull the image using the following command:

```bash
# Define the image name with tag
t=ultralytics/yolov5:latest

# Pull the latest YOLOv5 image from Docker Hub
sudo docker pull $t
```
You can browse all available images at the Ultralytics YOLOv5 Docker Hub repository.
Once the image is pulled, you can run it as a container.
To run an interactive container instance using only the CPU, use the `-it` flag. The `--ipc=host` flag shares the host IPC namespace, which is important for shared memory access.

```bash
# Run an interactive container instance using CPU only
sudo docker run -it --ipc=host $t
```
To enable GPU access within the container, use the `--gpus` flag. This requires the NVIDIA Container Toolkit to be installed correctly.

```bash
# Run with access to all available GPUs
sudo docker run -it --ipc=host --gpus all $t

# Run with access to specific GPUs (e.g., GPUs 2 and 3)
sudo docker run -it --ipc=host --gpus '"device=2,3"' $t
```

The nested quotes around `device=2,3` are intentional: the shell strips the outer single quotes, and Docker needs the inner double quotes to treat `device=2,3` as a single value.
Refer to the Docker run reference for more details on command options.
To work with your local files (datasets, model weights, etc.) inside the container, use the `-v` flag to mount a host directory into the container:

```bash
# Mount /path/on/host (your local machine) to /path/in/container (inside the container)
sudo docker run -it --ipc=host --gpus all -v /path/on/host:/path/in/container $t
```

Replace `/path/on/host` with the actual path on your machine and `/path/in/container` with the desired path inside the Docker container (e.g., `/usr/src/datasets`).
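When the command grows several flags, it can help to compose it from variables so the mount mapping is easy to audit before running. This is an illustrative sketch, not part of YOLOv5; `DATA_DIR` and `CONTAINER_DIR` are assumed names:

```bash
# Illustrative: compose the docker run command from variables.
DATA_DIR="$HOME/datasets"          # host directory to share (assumed path)
CONTAINER_DIR=/usr/src/datasets    # where it appears inside the container
t=ultralytics/yolov5:latest

# Print the final command for review before executing it
echo "sudo docker run -it --ipc=host --gpus all -v ${DATA_DIR}:${CONTAINER_DIR} $t"
```

Once the printed command looks right, run it directly (or drop the `echo`).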
You are now inside the running YOLOv5 Docker container! From here, you can execute standard YOLOv5 commands for various Machine Learning and Deep Learning tasks like Object Detection.
```bash
# Train a YOLOv5 model on your custom dataset (ensure data is mounted or downloaded)
python train.py --data your_dataset.yaml --weights yolov5s.pt --img 640

# Validate the trained model's performance (Precision, Recall, mAP)
python val.py --weights path/to/your/best.pt --data your_dataset.yaml

# Run inference on images or videos using a trained model
python detect.py --weights yolov5s.pt --source path/to/your/images_or_videos

# Export the trained model to formats like ONNX, CoreML, or TFLite for deployment
python export.py --weights yolov5s.pt --include onnx coreml tflite
```
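The `--data your_dataset.yaml` argument points at a dataset config file. As a minimal sketch, the file follows the structure of YOLOv5's bundled configs such as `coco128.yaml`; the paths and class names below are placeholders to adapt to your mounted dataset:

```bash
# Create a minimal YOLOv5 dataset config (placeholder paths and classes)
cat > your_dataset.yaml <<'EOF'
path: /usr/src/datasets/my_dataset  # dataset root directory
train: images/train                 # training images (relative to 'path')
val: images/val                     # validation images (relative to 'path')

names:
  0: person
  1: car
EOF

cat your_dataset.yaml
```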
Explore the documentation for detailed usage of different modes:
Learn more about evaluation metrics like Precision, Recall, and mAP. Understand different export formats like ONNX, CoreML, and TFLite, and explore various Model Deployment Options. Remember to manage your model weights effectively.
You have successfully set up and run YOLOv5 within a Docker container.