TensorRT 11.0 is coming soon in 2026 Q2 with powerful new capabilities designed to accelerate your AI inference workflows. With this major version bump, TensorRT's API will be streamlined and a few legacy features will be removed.
We recommend migrating early to prepare for these changes.
This repository contains the Open Source Software (OSS) components of NVIDIA TensorRT. It includes the sources for TensorRT plugins and ONNX parser, as well as sample applications demonstrating usage and capabilities of the TensorRT platform. These open source software components are a subset of the TensorRT General Availability (GA) release with some extensions and bug-fixes.
Need enterprise support? NVIDIA global support is available for TensorRT with the NVIDIA AI Enterprise software suite. Check out NVIDIA LaunchPad for free access to a set of hands-on labs with TensorRT hosted on NVIDIA infrastructure.
Join the TensorRT and Triton community and stay current on the latest product updates, bug fixes, content, best practices, and more.
We provide the TensorRT Python package for an easy installation.
To install:
```bash
pip install tensorrt
```
You can skip the Build section to enjoy TensorRT with Python.
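If the wheel installed correctly, importing the module and printing its version is a quick sanity check (a minimal sketch; it assumes a Python 3 environment on a machine with a supported NVIDIA driver):

```bash
# Import the tensorrt Python module and print the installed version.
python3 -c "import tensorrt as trt; print(trt.__version__)"
```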
To build the TensorRT-OSS components, you will first need the following software packages.
- TensorRT GA build
- System Packages
- Optional Packages
  - Containerized build
  - PyPI packages (for demo applications/tests)
  - Code formatting tools (for contributors)
NOTE: The onnx-tensorrt, cub, and protobuf packages are downloaded along with TensorRT OSS and do not need to be installed separately.
```bash
git clone -b main https://github.com/nvidia/TensorRT TensorRT
cd TensorRT
git submodule update --init --recursive
```
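The build steps below refer to $TRT_OSSPATH. If your environment does not already define it, pointing it at the root of this checkout (an assumption, not an official step) keeps the later commands copy-pasteable:

```bash
# Assumption: TRT_OSSPATH points at the root of the TensorRT OSS checkout,
# so the cmake commands below can locate sources and toolchain files.
export TRT_OSSPATH=$(pwd)
```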
If using the TensorRT OSS build container, TensorRT libraries are preinstalled under /usr/lib/x86_64-linux-gnu and you may skip this step.
Otherwise, download and extract the TensorRT GA build from the NVIDIA Developer Zone using the direct links below:
Example: Ubuntu 22.04 on x86-64 with cuda-13.1
```bash
cd ~/Downloads
tar -xvzf TensorRT-10.15.1.29.Linux.x86_64-gnu.cuda-13.1.tar.gz
export TRT_LIBPATH=`pwd`/TensorRT-10.15.1.29/lib
```
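A quick way to confirm TRT_LIBPATH points at the extracted libraries (a hedged check; the exact version suffixes in the listing depend on the release you downloaded):

```bash
# The lib directory should contain the core TensorRT shared libraries,
# e.g. libnvinfer.so and libnvinfer_plugin.so.
ls "$TRT_LIBPATH" | grep -i nvinfer
```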
Example: Windows on x86-64 with cuda-12.9
```powershell
Expand-Archive -Path TensorRT-10.15.1.29.Windows.win10.cuda-12.9.zip
$env:TRT_LIBPATH="$pwd\TensorRT-10.15.1.29\lib"
```
For Linux platforms, we recommend that you generate a docker container for building TensorRT OSS as described below. For native builds, please install the prerequisite System Packages.
Example: Ubuntu 24.04 on x86-64 with cuda-13.1 (default)
```bash
./docker/build.sh --file docker/ubuntu-24.04.Dockerfile --tag tensorrt-ubuntu24.04-cuda13.1
```
Example: Rockylinux8 on x86-64 with cuda-13.1
```bash
./docker/build.sh --file docker/rockylinux8.Dockerfile --tag tensorrt-rockylinux8-cuda13.1
```
Example: Ubuntu 24.04 cross-compile for Jetson (aarch64) with cuda-13.1 (JetPack SDK)
```bash
./docker/build.sh --file docker/ubuntu-cross-aarch64.Dockerfile --tag tensorrt-jetpack-cuda13.1
```
Example: Ubuntu 24.04 on aarch64 with cuda-13.1
```bash
./docker/build.sh --file docker/ubuntu-24.04-aarch64.Dockerfile --tag tensorrt-aarch64-ubuntu24.04-cuda13.1
```
Example: Ubuntu 24.04 build container
```bash
./docker/launch.sh --tag tensorrt-ubuntu24.04-cuda13.1 --gpus all
```
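To also expose a Jupyter notebook server from the build container, the launch script accepts a --jupyter <port> flag (see the note below). For example, using an arbitrary port of 8888:

```bash
# Same launch as above, additionally serving Jupyter notebooks on port 8888.
./docker/launch.sh --tag tensorrt-ubuntu24.04-cuda13.1 --gpus all --jupyter 8888
```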
NOTE:
- Use the --tag corresponding to the build container generated in Step 1.
- The sudo password for Ubuntu build containers is 'nvidia'.
- Use --jupyter <port> for launching Jupyter notebooks.

Generate Makefiles and build
Example: Linux (x86-64) build with default cuda-13.1
```bash
cd $TRT_OSSPATH
mkdir -p build && cd build
cmake .. -DTRT_LIB_DIR=$TRT_LIBPATH -DTRT_OUT_DIR=`pwd`/out
make -j$(nproc)
```
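Build artifacts are copied into the directory given by TRT_OUT_DIR, so listing it confirms the build produced the libraries and sample binaries (a hedged check; the exact file set depends on which BUILD_* options are enabled):

```bash
# Still inside $TRT_OSSPATH/build; built libraries and samples land in ./out.
ls out
```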
Example: Linux (aarch64) build with default cuda-13.1
```bash
cd $TRT_OSSPATH
mkdir -p build && cd build
cmake .. -DTRT_LIB_DIR=$TRT_LIBPATH -DTRT_OUT_DIR=`pwd`/out -DCMAKE_TOOLCHAIN_FILE=$TRT_OSSPATH/cmake/toolchains/cmake_aarch64-native.toolchain
make -j$(nproc)
```
Example: Native build on Jetson Thor (aarch64) with cuda-13.1
```bash
cd $TRT_OSSPATH
mkdir -p build && cd build
cmake .. -DTRT_LIB_DIR=$TRT_LIBPATH -DTRT_OUT_DIR=`pwd`/out -DTRT_PLATFORM_ID=aarch64
CC=/usr/bin/gcc make -j$(nproc)
```
NOTE: The C compiler must be explicitly specified via CC= for native aarch64 builds of protobuf.
Example: Ubuntu 24.04 Cross-Compile for Jetson Thor (aarch64) with cuda-13.1 (JetPack)
```bash
cd $TRT_OSSPATH
mkdir -p build && cd build
cmake .. -DTRT_LIB_DIR=$TRT_LIBPATH -DCMAKE_TOOLCHAIN_FILE=$TRT_OSSPATH/cmake/toolchains/cmake_aarch64_cross.toolchain
make -j$(nproc)
```
Example: Ubuntu 24.04 Cross-Compile for DriveOS (aarch64) with cuda-13.1
```bash
cd $TRT_OSSPATH
mkdir -p build && cd build
cmake .. -DTRT_LIB_DIR=$TRT_LIBPATH -DCMAKE_TOOLCHAIN_FILE=$TRT_OSSPATH/cmake/toolchains/cmake_aarch64_dos_cross.toolchain
make -j$(nproc)
```
Example: Native builds on Windows (x86) with cuda-13.1
```powershell
cd $TRT_OSSPATH
New-Item -ItemType Directory -Path build
cd build
cmake .. -DTRT_LIB_DIR="$env:TRT_LIBPATH" -DTRT_OUT_DIR="$pwd\\out"
msbuild TensorRT.sln /property:Configuration=Release -m:$env:NUMBER_OF_PROCESSORS
```
NOTE: The default CUDA version used by CMake is 13.1. To override this, for example to 12.9, append -DCUDA_VERSION=12.9 to the cmake command.
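For instance, reusing the Linux x86-64 layout above, a configure step pinned to CUDA 12.9 might look like this (a sketch; it assumes the TRT_LIBPATH and TRT_OSSPATH setup from earlier):

```bash
# Same configure command as the Linux x86-64 example, pinned to CUDA 12.9.
cmake .. -DTRT_LIB_DIR=$TRT_LIBPATH -DTRT_OUT_DIR=`pwd`/out -DCUDA_VERSION=12.9
```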
Required CMake build arguments are:
- TRT_LIB_DIR: Path to the TensorRT installation directory containing libraries.
- TRT_OUT_DIR: Output directory where generated build artifacts will be copied.

Optional CMake build arguments:
- CMAKE_BUILD_TYPE: Specify if binaries generated are for release or debug (contain debug symbols). Values consist of [Release] | Debug.
- CUDA_VERSION: The version of CUDA to target, for example [12.9.9].
- CUDNN_VERSION: The version of cuDNN to target, for example [8.9].
- PROTOBUF_VERSION: The version of Protobuf to use, for example [3.20.1]. Note: Changing this will not configure CMake to use a system version of Protobuf; it will configure CMake to download and try building that version.
- CMAKE_TOOLCHAIN_FILE: The path to a toolchain file for cross compilation.
- BUILD_PARSERS: Specify if the parsers should be built, for example [ON] | OFF. If turned OFF, CMake will try to find precompiled versions of the parser libraries to use in compiling samples: first in ${TRT_LIB_DIR}, then on the system. If the build type is Debug, debug builds of the libraries are preferred over release versions if available.
- BUILD_PLUGINS: Specify if the plugins should be built, for example [ON] | OFF. If turned OFF, CMake will try to find a precompiled version of the plugin library to use in compiling samples: first in ${TRT_LIB_DIR}, then on the system. If the build type is Debug, debug builds of the libraries are preferred over release versions if available.
- BUILD_SAMPLES: Specify if the samples should be built, for example [ON] | OFF.
- BUILD_SAFE_SAMPLES: Specify if safety samples should be built, for example [ON] | OFF.
- TRT_SAFETY_INFERENCE_ONLY: Specify whether to build only the safety inference components, for example [ON] | OFF. If turned ON, all other components will be turned OFF except BUILD_SAFE_SAMPLES.
- GPU_ARCHS: GPU (SM) architectures to target. By default we generate CUDA code for all major SMs. Specific SM versions can be specified here as a quoted, space-separated list to reduce compilation time and binary size. A table of compute capabilities of NVIDIA GPUs can be found here. Examples:
  - NVIDIA A100: -DGPU_ARCHS="80"
  - RTX 50 series: -DGPU_ARCHS="120"
  - Multiple SMs: -DGPU_ARCHS="80 120"
- TRT_PLATFORM_ID: Bare-metal build (unlike containerized cross-compilation). Currently supported options: x86_64 (default).

Generate Makefiles and build
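Before the platform-specific examples that follow, here is how several of the optional arguments above might be combined in one configure step (a sketch reusing the Linux x86-64 layout from earlier; adjust GPU_ARCHS for your hardware):

```bash
# Release configure restricted to SM 80 (e.g. A100), with an explicit output dir.
cmake .. -DTRT_LIB_DIR=$TRT_LIBPATH -DTRT_OUT_DIR=`pwd`/out \
         -DCMAKE_BUILD_TYPE=Release -DGPU_ARCHS="80"
make -j$(nproc)
```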
Example: Cross-Compile for DOS7 Linux (aarch64)
```bash
cd $TRT_OSSPATH
mkdir -p build && cd build
cmake .. -DBUILD_SAMPLES=ON -DBUILD_PLUGINS=OFF -DBUILD_PARSERS=OFF -DTRT_OUT_DIR=`pwd`/bin_dynamic_cross -DTRT_LIB_DIR=$TRT_LIBPATH -DCMAKE_TOOLCHAIN_FILE=$TRT_OSSPATH/cmake/toolchains/cmake_aarch64_dos_cross.toolchain
make -j$(nproc)
```
Example: Cross-Compile for DOS6.5 Linux (aarch64)
```bash
cd $TRT_OSSPATH
mkdir -p build && cd build
cmake .. -DBUILD_SAMPLES=ON -DBUILD_PLUGINS=OFF -DBUILD_PARSERS=OFF -DTRT_OUT_DIR=`pwd`/bin_dynamic_cross -DTRT_LIB_DIR=$TRT_LIBPATH -DCMAKE_TOOLCHAIN_FILE=$TRT_OSSPATH/cmake/toolchains/cmake_aarch64_dos_cross.toolchain -DCUDA_VERSION=11.4 -DGPU_ARCHS=87
make -j$(nproc)
```
Example: Native build for DOS6.5 and DOS7 Linux (aarch64)
```bash
cd $TRT_OSSPATH
mkdir -p build && cd build
cmake .. -DTRT_LIB_DIR=$TRT_LIBPATH -DTRT_OUT_DIR=`pwd`/out -DCMAKE_TOOLCHAIN_FILE=$TRT_OSSPATH/cmake/toolchains/cmake_aarch64-native.toolchain -DBUILD_SAMPLES=ON -DBUILD_PLUGINS=OFF -DBUILD_PARSERS=OFF
make -j$(nproc)
```
Example: Cross-Compile for DOS6.5 QNX (aarch64)
```bash
cd $TRT_OSSPATH
mkdir -p build && cd build
export CUDA_VERSION=11.4
export CUDA=cuda-$CUDA_VERSION
export CUDA_ROOT=/usr/local/cuda-safe-$CUDA_VERSION
export QNX_BASE=/drive/toolchains/qnx_toolchain # Set to your QNX toolchain installation path
export QNX_HOST=$QNX_BASE/host/linux/x86_64/
export QNX_TARGET=$QNX_BASE/target/qnx7/
export PATH=$PATH:$QNX_HOST/usr/bin
cmake .. -DBUILD_SAMPLES=ON -DBUILD_PLUGINS=OFF -DBUILD_PARSERS=OFF -DBUILD_SAFE_SAMPLES=OFF -DCMAKE_CUDA_COMPILER=$CUDA_ROOT/bin/nvcc -DTRT_OUT_DIR=`pwd`/bin_dynamic_cross -DTRT_LIB_DIR=$TRT_LIBPATH -DCMAKE_TOOLCHAIN_FILE=$TRT_OSSPATH/cmake/toolchains/cmake_qnx.toolchain -DCUDA_VERSION=$CUDA_VERSION -DGPU_ARCHS=87
make -j$(nproc)
```
NOTE: Set QNX_BASE to your QNX toolchain installation path. If your CUDA version is not the same as in the example, set CUDA_VERSION (for examples that use it in multiple places) or add -DCUDA_VERSION=<version> to the cmake command.
Example: Cross-Compile for DOS6.5 QNX Safety (aarch64)
```bash
cd $TRT_OSSPATH
mkdir -p build && cd build
export CUDA_VERSION=11.4
export QNX_BASE=/drive/toolchains/qnx_toolchain # Set to your QNX toolchain installation path
export QNX_HOST=$QNX_BASE/host/linux/x86_64/
export QNX_TARGET=$QNX_BASE/target/qnx7/
export PATH=$PATH:$QNX_HOST/usr/bin
export CUDA=cuda-$CUDA_VERSION
export CUDA_ROOT=/usr/local/cuda-safe-$CUDA_VERSION
cmake .. -DBUILD_SAMPLES=OFF -DBUILD_SAFE_SAMPLES=ON -DBUILD_PLUGINS=OFF -DBUILD_PARSERS=OFF -DTRT_SAFETY_INFERENCE_ONLY=ON -DTRT_OUT_DIR=`pwd`/bin_dynamic_cross -DTRT_LIB_DIR=$TRT_LIBPATH -DCMAKE_TOOLCHAIN_FILE=$TRT_OSSPATH/cmake/toolchains/cmake_qnx_safe.toolchain -DCUDA_VERSION=$CUDA_VERSION -DCMAKE_CUDA_COMPILER=$CUDA_ROOT/bin/nvcc -DGPU_ARCHS=87
make -j$(nproc)
```
NOTE: Set QNX_BASE to your QNX toolchain installation path. If your CUDA version is not the same as in the example, set CUDA_VERSION (for examples that use it in multiple places) or add -DCUDA_VERSION=<version> to the cmake command.
Example: Cross-Compile for DOS7 QNX (aarch64)
```bash
cd $TRT_OSSPATH
mkdir -p build && cd build
export CUDA_VERSION=13.1
export CUDA=cuda-$CUDA_VERSION
export CUDA_ROOT=/usr/local/cuda-safe-$CUDA_VERSION
export QNX_BASE=/drive/toolchains/qnx_toolchain # Set to your QNX toolchain installation path
export QNX_HOST=$QNX_BASE/host/linux/x86_64/
export QNX_TARGET=$QNX_BASE/target/qnx/
export PATH=$PATH:$QNX_HOST/usr/bin
cmake .. -DBUILD_SAMPLES=ON -DBUILD_PLUGINS=OFF -DBUILD_PARSERS=OFF -DBUILD_SAFE_SAMPLES=OFF -DCMAKE_CUDA_COMPILER=$CUDA_ROOT/bin/nvcc -DTRT_OUT_DIR=`pwd`/bin_dynamic_cross -DTRT_LIB_DIR=$TRT_LIBPATH -DCMAKE_TOOLCHAIN_FILE=$TRT_OSSPATH/cmake/toolchains/cmake_qnx.toolchain -DCUDA_VERSION=$CUDA_VERSION -DGPU_ARCHS=110
make -j$(nproc)
```
NOTE: Set QNX_BASE to your QNX toolchain installation path. If your CUDA version is not the same as in the example, set CUDA_VERSION (for examples that use it in multiple places) or add -DCUDA_VERSION=<version> to the cmake command.