docker/README.md
We provide pre-built Docker images for quick setup. Starting from this version, we use a new image release hierarchy for better productivity and stability.
Starting from v0.6.0, we use the vLLM and SGLang release images as our base images.
Starting from v0.7.0, since vllm/vllm-openai:v0.12.0 is a minimal image lacking some essential libraries, we use nvidia/cuda:12.9.1-devel-ubuntu22.04 as the base image for vLLM.
On top of the base image, the following packages are added:
Latest Dockerfile:
All pre-built images are available on Docker Hub: https://hub.docker.com/r/verlai/verl, for example verlai/verl:sgl059.latest and verlai/verl:vllm017.latest.
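The example tags above combine a framework abbreviation and version. The helper below sketches that mapping; it covers only the two example tags given in this README, and the function name `pick_image` is our own, not part of verl:

```shell
# pick_image: map a framework name to one of the example image tags above.
# Illustrative only -- check Docker Hub for the full, current tag list.
pick_image() {
  case "$1" in
    sglang) echo "verlai/verl:sgl059.latest" ;;
    vllm)   echo "verlai/verl:vllm017.latest" ;;
    *)      echo "unknown framework: $1" >&2; return 1 ;;
  esac
}

pick_image sglang
```

You could then pull the result directly, e.g. `docker pull "$(pick_image sglang)"`.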
You can find the latest images used for development and CI in our GitHub workflows:
After pulling the desired Docker image and installing the desired inference and training frameworks, you can run the container with the following steps:
```bash
docker create --runtime=nvidia --gpus all --net=host --shm-size="10g" --cap-add=SYS_ADMIN -v .:/workspace/verl --name verl <image:tag> sleep infinity
docker start verl
docker exec -it verl bash
```
```bash
# install the nightly version (recommended)
git clone https://github.com/volcengine/verl && cd verl
pip3 install --no-deps -e .
```
[Optional] If you want to switch between different frameworks, you can install verl with the corresponding extras instead:
```bash
# install verl with inference framework extras
git clone https://github.com/volcengine/verl && cd verl
pip3 install -e .[vllm]
pip3 install -e .[sglang]
```
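After installing one of the extras, a quick way to confirm which inference backend is importable in the current Python environment is sketched below; the helper name `has_pkg` is our own, and it assumes the backends are importable as `vllm` and `sglang`:

```shell
# has_pkg: return 0 if the given Python package is importable, 1 otherwise
has_pkg() {
  python3 -c "import importlib.util, sys; sys.exit(0 if importlib.util.find_spec('$1') else 1)"
}

# report status for the two inference backends
for pkg in vllm sglang; do
  if has_pkg "$pkg"; then
    echo "$pkg: installed"
  else
    echo "$pkg: not installed"
  fi
done
```

Running this inside the container shows which framework the current environment can actually use.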