This guide provides comprehensive instructions for integrating Pipenv with Docker, including best practices, optimization techniques, and example configurations for different scenarios.
Docker containers provide isolated, reproducible environments for applications, while Pipenv manages Python dependencies. When used together, they create a powerful workflow for Python application deployment.
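The examples in this guide assume a project whose dependencies are declared in a Pipfile with a committed Pipfile.lock. As a reference point, a minimal Pipfile might look like the following (the `flask` and `pytest` entries are illustrative placeholders):

```toml
[[source]]
name = "pypi"
url = "https://pypi.org/simple"
verify_ssl = true

[packages]
flask = "*"

[dev-packages]
pytest = "*"

[requires]
python_version = "3.10"
```

Generate the lock file with `pipenv lock` and commit both files; the Dockerfiles below copy them into the image.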
Here's a basic Dockerfile that uses Pipenv:
```dockerfile
FROM python:3.10-slim

# Install pipenv
RUN pip install pipenv

# Set working directory
WORKDIR /app

# Copy Pipfile and Pipfile.lock
COPY Pipfile Pipfile.lock ./

# Install dependencies
RUN pipenv install --deploy --system

# Copy application code
COPY . .

# Run the application
CMD ["python", "app.py"]
```
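With this Dockerfile at the project root (next to the Pipfile), the image is built and run in the usual way; the `pipenv-demo` tag is just a placeholder:

```shell
docker build -t pipenv-demo .
docker run --rm pipenv-demo
```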
This approach relies on two important flags:

- `--system`: installs packages into the system Python instead of creating a virtual environment
- `--deploy`: verifies that `Pipfile.lock` is up to date with the `Pipfile` and fails the build if it isn't

A related flag, `--ignore-pipfile`, installs from the lock file only, ignoring the `Pipfile`.

Docker containers provide isolated environments, which raises the question: should you use `--system` to install packages globally, or create a virtual environment inside the container?
Using `--system` (recommended for most Docker use cases)

Installing into the system Python keeps the image simple: there is no virtual environment to locate or activate, and the application runs with a plain `python` command. The trade-off is that you give up Pipenv's runtime conveniences, such as `pipenv run` for executing scripts defined in the Pipfile.

Using a virtual environment

A virtual environment inside the container is worth the extra indirection when the container runs more than one Python application, when you depend on system Python tools (such as `add-apt-repository` in Ubuntu), or when you install `apt-get install python3-*` packages alongside your application.

For most production containers that run a single Python application, `--system` is the simpler and more efficient choice. The concerns about breaking system Python tools (raised in older guidance) are less relevant when the container runs exactly one application on a minimal base image, which ships little distribution-managed Python tooling to break.
Multi-stage builds create smaller, more secure images by separating the build environment from the runtime environment:
```dockerfile
FROM python:3.10-slim AS builder

# Install pipenv
RUN pip install pipenv

# Set working directory
WORKDIR /app

# Copy dependency files
COPY Pipfile Pipfile.lock ./

# Install dependencies
RUN pipenv install --deploy --system

FROM python:3.10-slim

# Set working directory
WORKDIR /app

# Copy installed packages from builder stage
COPY --from=builder /usr/local/lib/python3.10/site-packages /usr/local/lib/python3.10/site-packages
COPY --from=builder /usr/local/bin /usr/local/bin

# Copy application code
COPY . .

# Run as non-root user
RUN useradd -m appuser
USER appuser

# Run the application
CMD ["python", "app.py"]
```
This approach keeps Pipenv and the build tooling out of the final image: only the installed packages and your application code are carried into the runtime stage, yielding a smaller image with a reduced attack surface.
To take advantage of Docker's layer caching and speed up builds:
```dockerfile
FROM python:3.10-slim

# Install pipenv
RUN pip install pipenv

# Set working directory
WORKDIR /app

# Copy dependency files only
COPY Pipfile Pipfile.lock ./

# Install dependencies
RUN pipenv install --deploy --system

# Copy application code (changes more frequently)
COPY . .

# Run the application
CMD ["python", "app.py"]
```
This separates dependency installation from code copying, so dependencies are only reinstalled when Pipfile or Pipfile.lock change.
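Layer caching works best when the build context itself is small and stable: a `.dockerignore` file keeps local artifacts out of the `COPY . .` layer so that unrelated file changes don't invalidate it. A typical starting point, to be adjusted per project:

```
.git
.venv
__pycache__/
*.pyc
.pytest_cache/
.env
Dockerfile
docker-compose.yml
```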
For development environments, you might want to include development dependencies:
```dockerfile
FROM python:3.10-slim

# Install pipenv
RUN pip install pipenv

# Set working directory
WORKDIR /app

# Copy dependency files
COPY Pipfile Pipfile.lock ./

# Install dependencies including development packages
RUN pipenv install --dev --system

# Copy application code
COPY . .

# Run the development server
CMD ["python", "manage.py", "runserver", "0.0.0.0:8000"]
```
For production, focus on security and minimal image size:
```dockerfile
FROM python:3.10-slim AS builder

# Install pipenv
RUN pip install pipenv

# Set working directory
WORKDIR /app

# Copy dependency files
COPY Pipfile Pipfile.lock ./

# Install only production dependencies
RUN pipenv install --deploy --system

FROM python:3.10-slim

# Set working directory
WORKDIR /app

# Copy installed packages from builder stage
COPY --from=builder /usr/local/lib/python3.10/site-packages /usr/local/lib/python3.10/site-packages
COPY --from=builder /usr/local/bin /usr/local/bin

# Copy application code
COPY . .

# Create and use non-root user
RUN useradd -m appuser
USER appuser

# Set production environment variables
ENV PYTHONUNBUFFERED=1 \
    PYTHONDONTWRITEBYTECODE=1 \
    PYTHONPATH=/app

# Run the application with gunicorn (for web applications)
CMD ["gunicorn", "app:app", "--bind", "0.0.0.0:8000", "--workers", "4"]
```
Pipenv-based images work naturally with Docker Compose. A basic `docker-compose.yml` for a web application backed by PostgreSQL:

```yaml
version: '3.8'

services:
  web:
    build: .
    ports:
      - "8000:8000"
    volumes:
      - .:/app
    environment:
      - DATABASE_URL=postgresql://postgres:postgres@db:5432/app
    depends_on:
      - db

  db:
    image: postgres:14
    volumes:
      - postgres_data:/var/lib/postgresql/data
    environment:
      - POSTGRES_PASSWORD=postgres
      - POSTGRES_USER=postgres
      - POSTGRES_DB=app

volumes:
  postgres_data:
```
For development, a separate compose file can build from a dedicated Dockerfile, mount the source tree for live reloading, and run the development server:

```yaml
version: '3.8'

services:
  web:
    build:
      context: .
      dockerfile: Dockerfile.dev
    ports:
      - "8000:8000"
    volumes:
      - .:/app
    environment:
      - DEBUG=True
      - DATABASE_URL=postgresql://postgres:postgres@db:5432/app
    depends_on:
      - db
    command: python manage.py runserver 0.0.0.0:8000

  db:
    image: postgres:14
    volumes:
      - postgres_data:/var/lib/postgresql/data
    environment:
      - POSTGRES_PASSWORD=postgres
      - POSTGRES_USER=postgres
      - POSTGRES_DB=app

volumes:
  postgres_data:
```
For some workflows, you might want to use a project-local virtual environment:
```dockerfile
FROM python:3.10-slim

# Install pipenv
RUN pip install pipenv

# Set working directory
WORKDIR /app

# Copy dependency files
COPY Pipfile Pipfile.lock ./

# Create a project-local virtual environment
ENV PIPENV_VENV_IN_PROJECT=1
RUN pipenv install --deploy

# Copy application code
COPY . .

# Run the application using the virtual environment
CMD ["./.venv/bin/python", "app.py"]
```
If your application requires a specific Python version:
```dockerfile
FROM python:3.10-slim

# Install pipenv
RUN pip install pipenv

# Set working directory
WORKDIR /app

# Copy dependency files
COPY Pipfile Pipfile.lock ./

# Install dependencies
RUN pipenv install --deploy --system --python 3.10

# Copy application code
COPY . .

# Run the application
CMD ["python", "app.py"]
```
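The `--python` flag is largely redundant here, since the `python:3.10-slim` base image already supplies the interpreter. A more durable approach is to pin the version in the Pipfile's `[requires]` section, which Pipenv checks against the interpreter in use:

```toml
[requires]
python_version = "3.10"
```

With this in place, changing the base image to a mismatched Python version causes the install to complain rather than silently producing an inconsistent environment.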
For private packages or custom indexes:
```dockerfile
FROM python:3.10-slim

# Install pipenv
RUN pip install pipenv

# Set working directory
WORKDIR /app

# Copy dependency files
COPY Pipfile Pipfile.lock ./

# Set environment variables for private repository authentication
ARG PRIVATE_REPO_USERNAME
ARG PRIVATE_REPO_PASSWORD
ENV PIP_EXTRA_INDEX_URL=https://${PRIVATE_REPO_USERNAME}:${PRIVATE_REPO_PASSWORD}@private-repo.example.com/simple

# Install dependencies
RUN pipenv install --deploy --system

# Copy application code
COPY . .

# Run the application
CMD ["python", "app.py"]
```
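Rather than relying only on `PIP_EXTRA_INDEX_URL`, the private index can be declared in the Pipfile itself. Pipenv expands environment variables in source URLs, so the credentials stay out of version control (the index URL below mirrors the example above):

```toml
[[source]]
name = "pypi"
url = "https://pypi.org/simple"
verify_ssl = true

[[source]]
name = "private"
url = "https://${PRIVATE_REPO_USERNAME}:${PRIVATE_REPO_PASSWORD}@private-repo.example.com/simple"
verify_ssl = true
```

Packages can then be pinned to the private index with `package = {version = "*", index = "private"}` in `[packages]`.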
Always run your application as a non-root user:
```dockerfile
FROM python:3.10-slim

# Install pipenv and dependencies
RUN pip install pipenv

# Set working directory
WORKDIR /app

# Copy dependency files
COPY Pipfile Pipfile.lock ./

# Install dependencies
RUN pipenv install --deploy --system

# Copy application code
COPY . .

# Create and use non-root user
RUN useradd -m appuser && \
    chown -R appuser:appuser /app
USER appuser

# Run the application
CMD ["python", "app.py"]
```
Use build arguments and environment variables for secrets:
```dockerfile
FROM python:3.10-slim

# Install pipenv
RUN pip install pipenv

# Set working directory
WORKDIR /app

# Copy dependency files
COPY Pipfile Pipfile.lock ./

# Use a build argument for the secret; note that copying it into
# ENV persists it in the final image, so reserve this pattern for
# values that are safe to bake in
ARG API_KEY
ENV API_KEY=${API_KEY}

# Install dependencies
RUN pipenv install --deploy --system

# Copy application code
COPY . .

# Run the application
CMD ["python", "app.py"]
```
Then build with:
```shell
$ docker build --build-arg API_KEY=your-secret-key -t your-image .
```
For runtime secrets, use environment variables or Docker secrets.
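Be aware that plain `ARG` values can be recovered with `docker history`, and anything written to `ENV` persists in the final image. For build-time secrets, BuildKit secret mounts avoid both problems: the secret is available only to the single `RUN` instruction and is never stored in a layer. A sketch (requires BuildKit; the `pypi_creds` id and file name are arbitrary placeholders):

```dockerfile
# syntax=docker/dockerfile:1
FROM python:3.10-slim
RUN pip install pipenv
WORKDIR /app
COPY Pipfile Pipfile.lock ./

# The secret file is mounted at /run/secrets/pypi_creds for this RUN only
RUN --mount=type=secret,id=pypi_creds \
    PIP_EXTRA_INDEX_URL="$(cat /run/secrets/pypi_creds)" \
    pipenv install --deploy --system
```

Build with `docker build --secret id=pypi_creds,src=./pypi_creds.txt -t your-image .`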
Integrate security scanning into your Docker workflow:
```dockerfile
FROM python:3.10-slim AS builder

# Install pipenv
RUN pip install pipenv

# Set working directory
WORKDIR /app

# Copy dependency files
COPY Pipfile Pipfile.lock ./

# Install dependencies
RUN pipenv install --deploy --system

# Scan for vulnerabilities
RUN pipenv scan

FROM python:3.10-slim

# Set working directory
WORKDIR /app

# Copy installed packages from builder stage
COPY --from=builder /usr/local/lib/python3.10/site-packages /usr/local/lib/python3.10/site-packages
COPY --from=builder /usr/local/bin /usr/local/bin

# Copy application code
COPY . .

# Run the application
CMD ["python", "app.py"]
```
Pipenv-based images fit standard CI pipelines. A GitHub Actions workflow that builds and pushes the image:

```yaml
name: Build and Push Docker Image

on:
  push:
    branches: [ main ]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3

      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v2

      - name: Login to DockerHub
        uses: docker/login-action@v2
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}

      - name: Build and push
        uses: docker/build-push-action@v4
        with:
          context: .
          push: true
          tags: yourusername/yourapp:latest
```
An equivalent GitLab CI pipeline builds, tests, and promotes the image:

```yaml
stages:
  - build
  - test
  - deploy

build:
  stage: build
  image: docker:20.10.16
  services:
    - docker:20.10.16-dind
  script:
    - docker build -t $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA .
    - docker push $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA

test:
  stage: test
  image: $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA
  script:
    - python -m pytest

deploy:
  stage: deploy
  image: docker:20.10.16
  services:
    - docker:20.10.16-dind
  script:
    - docker pull $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA
    - docker tag $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA $CI_REGISTRY_IMAGE:latest
    - docker push $CI_REGISTRY_IMAGE:latest
```
If you encounter permission issues:
```dockerfile
# Add this to your Dockerfile
RUN pip install --user pipenv
ENV PATH="/root/.local/bin:${PATH}"
```
For packages with system dependencies:
```dockerfile
FROM python:3.10-slim

# Install system dependencies
RUN apt-get update && \
    apt-get install -y --no-install-recommends \
    build-essential \
    libpq-dev \
    && rm -rf /var/lib/apt/lists/*

# Install pipenv
RUN pip install pipenv

# Continue with your Dockerfile...
```
To speed up builds:
```dockerfile
# Pin pip and pipenv versions for reproducible builds
RUN pip install --upgrade pip==22.2.2 pipenv==2022.8.5

# Use pip cache
ENV PIP_CACHE_DIR=/var/cache/pip

# Use pip's fast-deps feature to speed up dependency resolution
RUN pipenv install --deploy --system --extra-pip-args="--use-feature=fast-deps"
```
To summarize the practices covered in this guide:

- Use multi-stage builds to create smaller, more secure images
- Separate dependency installation from code copying to leverage Docker's layer caching
- Run applications as non-root users to improve security
- Use the `--deploy` flag to ensure `Pipfile.lock` is up to date
- Use `--system` for single-application containers to install dependencies system-wide, or use a virtual environment if your container runs multiple Python applications
- Include only necessary files in your Docker image
- Set appropriate environment variables, such as `PYTHONUNBUFFERED=1`, for better logging
- Scan for vulnerabilities as part of your build process
- Use build arguments for build-time configuration
- Use environment variables or Docker secrets for runtime configuration
Integrating Pipenv with Docker creates a powerful workflow for Python application deployment. By following the best practices and examples in this guide, you can create efficient, secure, and reproducible Docker images for your Python applications.
Remember that Docker and Pipenv are both tools that help with reproducibility and dependency management. When used together correctly, they complement each other and provide a robust solution for Python application deployment.