Docker uses containers to create virtual environments that isolate a TensorFlow installation from the rest of the system. TensorFlow programs run within this virtual environment, which can share resources with the host machine (access directories, use the GPU, connect to the Internet, etc.). The TensorFlow Docker images are tested for each release.
Docker is the easiest way to enable TensorFlow GPU support on Linux since only the NVIDIA® GPU driver is required on the host machine (the NVIDIA® CUDA® Toolkit does not need to be installed).
Take note of your Docker version with `docker -v`. Versions earlier than 19.03 require nvidia-docker2 and the `--runtime=nvidia` flag. Versions 19.03 and later use the nvidia-container-toolkit package and the `--gpus all` flag. Both options are documented on the page linked above.

Note: To run the `docker` command without `sudo`, create the `docker` group and
add your user. For details, see the
post-installation steps for Linux.
The official TensorFlow Docker images are located in the tensorflow/tensorflow Docker Hub repository. Image releases are tagged using the following format:
| Tag | Description |
|---|---|
| `latest` | The latest release of TensorFlow CPU binary image. Default. |
| `nightly` | Nightly builds of the TensorFlow image. (Unstable.) |
| *version* | Specify the version of the TensorFlow binary image, for example: *2.8.3* |
Each base tag has variants that add or change functionality:
| Tag Variants | Description |
|---|---|
| *tag*-gpu | The specified *tag* release with GPU support. (See below) |
| *tag*-jupyter | The specified *tag* release with Jupyter (includes TensorFlow tutorial notebooks) |
You can use multiple variants at once. For example, the following downloads TensorFlow release images to your machine:
<pre class="devsite-click-to-copy prettyprint lang-bsh"> <code class="devsite-terminal">docker pull tensorflow/tensorflow # latest stable release</code> <code class="devsite-terminal">docker pull tensorflow/tensorflow:devel-gpu # nightly dev release w/ GPU support</code> <code class="devsite-terminal">docker pull tensorflow/tensorflow:latest-gpu-jupyter # latest release w/ GPU support and Jupyter</code> </pre>To start a TensorFlow-configured container, use the following command form:
<pre class="devsite-terminal devsite-click-to-copy"> docker run [-it] [--rm] [-p <em>hostPort</em>:<em>containerPort</em>] tensorflow/tensorflow[:<em>tag</em>] [<em>command</em>] </pre>For details, see the docker run reference.
Let's verify the TensorFlow installation using the latest tagged image. Docker
downloads a new TensorFlow image the first time it is run:
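As a minimal sketch, you can run a throwaway container that executes a one-line TensorFlow program (mirroring the GPU example later on this page; the random-tensor sum is only an illustration):

<pre class="devsite-terminal devsite-click-to-copy prettyprint lang-bsh">
docker run -it --rm tensorflow/tensorflow \
   python -c "import tensorflow as tf; print(tf.reduce_sum(tf.random.normal([1000, 1000])))"
</pre>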
Success: TensorFlow is now installed. Read the tutorials to get started.
Let's demonstrate some more TensorFlow Docker recipes. Start a bash shell
session within a TensorFlow-configured container:
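One way to do this, as a sketch:

<pre class="devsite-terminal devsite-click-to-copy">
docker run -it tensorflow/tensorflow bash
</pre>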
Within the container, you can start a python session and import TensorFlow.
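For example, from the container's shell (printing the installed version is only an illustration):

<pre class="devsite-terminal devsite-click-to-copy">
python -c "import tensorflow as tf; print(tf.__version__)"
</pre>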
To run a TensorFlow program developed on the host machine within a container,
mount the host directory and change the container's working directory
(-v hostDir:containerDir -w workDir):
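For example, to run a script from the current host directory inside the container (`./script.py` is a hypothetical file name):

<pre class="devsite-terminal devsite-click-to-copy">
docker run -it --rm -v $PWD:/tmp -w /tmp tensorflow/tensorflow python ./script.py
</pre>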
Permission issues can arise when files created within a container are exposed to the host. It's usually best to edit files on the host system.
Start a Jupyter Notebook server using TensorFlow's nightly build:
<pre class="devsite-terminal devsite-click-to-copy"> docker run -it -p 8888:8888 tensorflow/tensorflow:nightly-jupyter </pre>Follow the instructions and open the URL in your host web browser:
http://127.0.0.1:8888/?token=...
Docker is the easiest way to run TensorFlow on a GPU since the host machine only requires the NVIDIA® driver (the NVIDIA® CUDA® Toolkit is not required).
Install the NVIDIA Container Toolkit
to add NVIDIA® GPU support to Docker. `nvidia-container-runtime` is only
available for Linux. See the `nvidia-container-runtime`
platform support FAQ
for details.
Check if a GPU is available:
<pre class="devsite-terminal devsite-click-to-copy"> lspci | grep -i nvidia </pre>Verify your nvidia-docker installation:
Note: nvidia-docker v2 uses `--runtime=nvidia` instead of `--gpus all`. nvidia-docker v1 uses the `nvidia-docker` alias,
rather than the `--runtime=nvidia` or `--gpus all` command line flags.
Download and run a GPU-enabled TensorFlow image (may take a few minutes):
<pre class="devsite-terminal devsite-click-to-copy prettyprint lang-bsh"> docker run --gpus all -it --rm tensorflow/tensorflow:latest-gpu \ python -c "import tensorflow as tf; print(tf.reduce_sum(tf.random.normal([1000, 1000])))" </pre>It can take a while to set up the GPU-enabled image. If repeatedly running
GPU-based scripts, you can use docker exec to reuse a container.
Use the latest TensorFlow GPU image to start a bash shell session in the container:
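As a sketch:

<pre class="devsite-terminal devsite-click-to-copy">
docker run --gpus all -it tensorflow/tensorflow:latest-gpu bash
</pre>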
Success: TensorFlow is now installed. Read the tutorials to get started.