Installing TensorRT#
When installing TensorRT, you can choose between the following installation options: Debian or RPM packages, a Python wheel file, a tar file, or a zip file.
The Debian and RPM installations automatically install any dependencies. However, this method:

- Requires `sudo` or root privileges to install.
- Provides no flexibility as to which location TensorRT is installed into.
- Requires that the CUDA Toolkit also be installed using Debian or RPM packages.
- Does not allow more than one minor version of TensorRT to be installed at the same time.
The tar file provides more flexibility, such as installing multiple versions of TensorRT simultaneously. However, you must install the necessary dependencies and manage `LD_LIBRARY_PATH` yourself. For more information, refer to Tar File Installation.
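For example, two tar installations can coexist, with the active version chosen per shell through `LD_LIBRARY_PATH` (a minimal sketch; the `/opt` install paths are hypothetical):

```bash
# Two hypothetical side-by-side tar installations under /opt:
#   /opt/TensorRT-10.8.0.x  and  /opt/TensorRT-10.9.0.x
# Select one per shell by pointing the dynamic linker at its libraries.
export LD_LIBRARY_PATH=/opt/TensorRT-10.9.0.x/lib:$LD_LIBRARY_PATH
```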
TensorRT versions: TensorRT is a product made up of separately versioned components. The product version conveys important information about the significance of new features, while the library version conveys information about the compatibility or incompatibility of the API.

| Product/Component | Previous Released Version | Current Version |
|---|---|---|
| TensorRT product | 10.8.0 | 10.9.0 |

All separately versioned TensorRT components likewise moved from 10.8.0 to 10.9.0 in this release.
Python Package Index Installation#
This section contains instructions for installing TensorRT from the Python Package Index.
When installing TensorRT from the Python Package Index, you’re not required to install TensorRT from a `.tar`, `.deb`, `.rpm`, or `.zip` package. All the necessary libraries are included in the Python package. However, the header files, which may be needed to access TensorRT C++ APIs or compile plugins written in C++, are not included. Additionally, if you already have the TensorRT C++ libraries installed, using the Python Package Index version will install a redundant copy of these libraries, which may not be desirable. Refer to Tar File Installation for information on manually installing TensorRT wheels that do not bundle the C++ libraries. You can stop after this section if you only need Python support.
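For example, one quick way to spot a duplicate set of libraries is to compare what `pip` manages against what the system dynamic linker sees (a minimal check; the grep patterns assume the usual `nvinfer` library names):

```bash
# TensorRT wheels managed by pip (PyPI installation).
python3 -m pip list | grep -i tensorrt

# TensorRT libraries registered with the system linker (.deb/.rpm/tar installation).
ldconfig -p | grep -i nvinfer
```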
The `tensorrt` Python wheel files currently support Python versions 3.8 to 3.13 and will not work with other versions. Linux and Windows operating systems and x86_64 and ARM SBSA CPU architectures are presently supported. The Linux x86 Python wheels are expected to work on RHEL 8 or newer and Ubuntu 20.04 or newer. The Linux SBSA Python wheels are expected to work on Ubuntu 20.04 or newer. The Windows x64 Python wheels are expected to work on Windows 10 or newer.
Note

If you do not have root access, you are running outside a Python virtual environment, or for any other reason you would prefer a user installation, then append `--user` to any of the `pip` commands provided.
Ensure the `pip` Python module is up-to-date and the `wheel` Python module is installed before proceeding, or you may encounter issues during the TensorRT Python installation.

```bash
python3 -m pip install --upgrade pip
python3 -m pip install wheel
```
Install the TensorRT Python wheel.
Note

You may need to update the `setuptools` and `packaging` Python modules if you encounter a `TypeError` while performing the `pip install` command below.

If upgrading to a newer version of TensorRT, you may need to run the command `pip cache remove "tensorrt*"` to ensure the `tensorrt` meta packages are rebuilt and the latest dependent packages are installed.
```bash
python3 -m pip install --upgrade tensorrt
```

The above `pip` command will pull in all the required CUDA libraries in Python wheel format from PyPI because they are dependencies of the TensorRT Python wheel. It will also upgrade `tensorrt` to the latest version if you had a previous version installed.

A TensorRT Python Package Index installation is split into multiple modules:
- TensorRT libraries (`tensorrt-libs`).
- Python bindings matching the Python version in use (`tensorrt-bindings`).
- Frontend package, which pulls in the correct version of dependent TensorRT modules (`tensorrt`).

You can append `-cu11` or `-cu12` to any Python module name if you require a different CUDA major version. When unspecified, the TensorRT Python meta-packages default to the CUDA 12.x variants, the latest CUDA version supported by TensorRT. For example:
```bash
python3 -m pip install tensorrt-cu11 tensorrt-lean-cu11 tensorrt-dispatch-cu11
```
Optionally, install the TensorRT lean or dispatch runtime wheels, which are similarly split into multiple Python modules. If you only use TensorRT to run prebuilt version-compatible engines, you can install these wheels without the regular TensorRT wheel.

```bash
python3 -m pip install --upgrade tensorrt-lean
python3 -m pip install --upgrade tensorrt-dispatch
```
To verify that your installation is working, use the following Python commands:
- Import the `tensorrt` Python module.
- Confirm that the correct version of TensorRT has been installed.
- Create a `Builder` object to verify that your CUDA installation is working.
```python
python3
>>> import tensorrt
>>> print(tensorrt.__version__)
>>> assert tensorrt.Builder(tensorrt.Logger())
```
Use a similar procedure to verify that the lean and dispatch modules work as expected:
```python
python3
>>> import tensorrt_lean as trt
>>> print(trt.__version__)
>>> assert trt.Runtime(trt.Logger())
```

```python
python3
>>> import tensorrt_dispatch as trt
>>> print(trt.__version__)
>>> assert trt.Runtime(trt.Logger())
```
Suppose the final Python command fails with an error message similar to the one below. In that case, you may not have the NVIDIA driver installed, or the NVIDIA driver may not be working properly. If you are running inside a container, try starting from one of the `nvidia/cuda:x.y-base-<os>` containers.

```
[TensorRT] ERROR: CUDA initialization failure with error 100. Please check your CUDA installation: ...
```
If the Python commands above worked, you should now be able to run any of the TensorRT Python samples to confirm further that your TensorRT installation is working. For more information about TensorRT samples, refer to the Sample Support Guide.
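Before moving on to the samples, you can also run a small end-to-end smoke test that builds a trivial engine on your GPU. This is not one of the official samples, just a minimal sketch: the single-identity-layer network and the tensor shape are arbitrary choices for illustration.

```bash
python3 - <<'EOF'
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network(0)  # TensorRT 10 networks are always explicit-batch

# A trivial graph: one input passed through an identity layer.
x = network.add_input("x", trt.float32, (1, 3, 8, 8))
identity = network.add_identity(x)
network.mark_output(identity.get_output(0))

config = builder.create_builder_config()
engine = builder.build_serialized_network(network, config)
assert engine is not None, "engine build failed"
print("Successfully built a serialized engine")
EOF
```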
Downloading TensorRT#
1. Ensure you are a member of the NVIDIA Developer Program. If you need help, follow the prompts to gain access.
2. Click GET STARTED, then click Download Now.
3. Select the version of TensorRT that you are interested in.
4. Select the checkbox to agree to the license terms.
5. Click the package you want to install. Your download begins.
Debian Installation#
Using a Local Repo for Debian Installation#
This section contains instructions for a developer installation. This installation method is for new users or users who want the complete developer installation, including samples and documentation for both the C++ and Python APIs.
For advanced users who are already familiar with TensorRT and want to get their application running quickly, are using an NVIDIA CUDA container, or want to set up automation, follow the network repo installation instructions (refer to Using The NVIDIA CUDA Network Repo For Debian Installation).
Note
When installing Python packages using this method, you must manually install TensorRT’s Python dependencies with `pip`.
Prerequisites
Ensure that you have the following dependencies installed.
- CUDA
- cuDNN 8.9.7 (optional; not required for lean or dispatch runtime installations)
Installation
Install CUDA according to the CUDA installation instructions.
Download the TensorRT local repo file that matches the Ubuntu version and CPU architecture you are using.
Install TensorRT from the Debian local repo package. Replace `ubuntuxx04`, `10.x.x`, and `cuda-x.x` with your specific OS, TensorRT, and CUDA versions. For ARM SBSA and JetPack users, replace `amd64` with `arm64`. JetPack users also need to replace `nv-tensorrt-local-repo` with `nv-tensorrt-local-tegra-repo`.

```bash
os="ubuntuxx04"
tag="10.x.x-cuda-x.x"
sudo dpkg -i nv-tensorrt-local-repo-${os}-${tag}_1.0-1_amd64.deb
sudo cp /var/nv-tensorrt-local-repo-${os}-${tag}/*-keyring.gpg /usr/share/keyrings/
sudo apt-get update
```
Package Type Install

| Package Type | Command |
|---|---|
| For the full C++ and Python runtimes | `sudo apt-get install tensorrt` |
| For the lean runtime only, instead of `tensorrt` | `sudo apt-get install libnvinfer-lean10`<br>`sudo apt-get install libnvinfer-vc-plugin10` |
| For the lean runtime Python package | `sudo apt-get install python3-libnvinfer-lean` |
| For the dispatch runtime only, instead of `tensorrt` | `sudo apt-get install libnvinfer-dispatch10`<br>`sudo apt-get install libnvinfer-vc-plugin10` |
| For the dispatch runtime Python package | `sudo apt-get install python3-libnvinfer-dispatch` |
| For all TensorRT Python packages without samples | `python3 -m pip install numpy`<br>`sudo apt-get install python3-libnvinfer-dev` |
The following additional packages will be installed:

- `python3-libnvinfer`
- `python3-libnvinfer-lean`
- `python3-libnvinfer-dispatch`

If you want to install Python packages only for the lean or dispatch runtime, specify these individually rather than installing the `dev` package.

If you require Python modules for a Python version other than the system’s default Python version, you should install the `*.whl` files directly from the tar package.

If you want to run samples that require `onnx-graphsurgeon`, or if you want to use the Python module in your own project, install the following:

```bash
python3 -m pip install numpy onnx onnx-graphsurgeon
```
Verify the installation.
Package Type Verification

| Package Type | Command | Expected Output |
|---|---|---|
| For the full TensorRT release | `dpkg-query -W tensorrt` | `tensorrt 10.9.0.x-1+cuda12.8` |
| For the lean runtime or the dispatch runtime only | `dpkg-query -W "*nvinfer*"` | All related `libnvinfer*` packages you installed |
Using The NVIDIA CUDA Network Repo For Debian Installation#
This installation method is for advanced users who are already familiar with TensorRT and want to get their application running quickly or to set up automation, such as when using containers. New users or users who want the complete installation, including samples and documentation, should follow the local repo installation instructions (refer to Debian Installation).
Note
If you are using a CUDA container, then the NVIDIA CUDA network repository will already be set up, and you can skip step 1.
Follow the CUDA Toolkit Download page instructions to install the CUDA network repository.
- Select the Linux operating system.
- Select the desired architecture.
- Select the Ubuntu distribution.
- Select the desired Ubuntu version.
- Select the deb (network) installer type.
- Enter the commands provided into your terminal.

You can omit the final `apt-get install` command if you do not require the entire CUDA Toolkit. While installing TensorRT, `apt` downloads the required CUDA dependencies for you automatically.

Install the TensorRT package that fits your particular needs.
Package Type Install

| Package Type | Command |
|---|---|
| For the lean runtime only | `sudo apt-get install libnvinfer-lean10` |
| For the lean runtime Python package | `sudo apt-get install python3-libnvinfer-lean` |
| For the dispatch runtime only | `sudo apt-get install libnvinfer-dispatch10` |
| For the dispatch runtime Python package | `sudo apt-get install python3-libnvinfer-dispatch` |
| For only running TensorRT C++ applications | `sudo apt-get install tensorrt-libs` |
| For also building TensorRT C++ applications | `sudo apt-get install tensorrt-dev` |
| For also building TensorRT C++ applications, lean only | `sudo apt-get install libnvinfer-lean-dev` |
| For also building TensorRT C++ applications, dispatch only | `sudo apt-get install libnvinfer-dispatch-dev` |
| For the standard runtime Python package | `python3 -m pip install numpy`<br>`sudo apt-get install python3-libnvinfer` |
If your application requires other Python modules, such as `onnx-graphsurgeon`, use `pip` to install them. Refer to onnx-graphsurgeon · PyPI for additional information.

Ubuntu will install TensorRT for the latest CUDA version by default when using the CUDA network repository. The following commands install `tensorrt` and related TensorRT packages for an older CUDA version and hold these packages at that version. Replace `10.x.x.x` with your version of TensorRT and `cudax.x` with your CUDA version.

```bash
version="10.x.x.x-1+cudax.x"
sudo apt-get install libnvinfer-bin=${version} libnvinfer-dev=${version} libnvinfer-dispatch-dev=${version} libnvinfer-dispatch10=${version} libnvinfer-headers-dev=${version} libnvinfer-headers-plugin-dev=${version} libnvinfer-lean-dev=${version} libnvinfer-lean10=${version} libnvinfer-plugin-dev=${version} libnvinfer-plugin10=${version} libnvinfer-samples=${version} libnvinfer-vc-plugin-dev=${version} libnvinfer-vc-plugin10=${version} libnvinfer10=${version} libnvonnxparsers-dev=${version} libnvonnxparsers10=${version} python3-libnvinfer-dev=${version} python3-libnvinfer-dispatch=${version} python3-libnvinfer-lean=${version} python3-libnvinfer=${version} tensorrt-dev=${version} tensorrt-libs=${version} tensorrt=${version}
sudo apt-mark hold libnvinfer-bin libnvinfer-dev libnvinfer-dispatch-dev libnvinfer-dispatch10 libnvinfer-headers-dev libnvinfer-headers-plugin-dev libnvinfer-lean-dev libnvinfer-lean10 libnvinfer-plugin-dev libnvinfer-plugin10 libnvinfer-samples libnvinfer-vc-plugin-dev libnvinfer-vc-plugin10 libnvinfer10 libnvonnxparsers-dev libnvonnxparsers10 python3-libnvinfer-dev python3-libnvinfer-dispatch python3-libnvinfer-lean python3-libnvinfer tensorrt-dev tensorrt-libs tensorrt
```
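To confirm the holds took effect, you can list the packages `apt` is currently holding (a quick sanity check, not part of the original steps):

```bash
# Show held packages, filtered to the TensorRT-related ones.
apt-mark showhold | grep -E 'nvinfer|tensorrt'
```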
If you want to upgrade to the latest version of TensorRT or the newest version of CUDA, you can unhold the packages using the following command.
```bash
sudo apt-mark unhold libnvinfer-bin libnvinfer-dev libnvinfer-dispatch-dev libnvinfer-dispatch10 libnvinfer-headers-dev libnvinfer-headers-plugin-dev libnvinfer-lean-dev libnvinfer-lean10 libnvinfer-plugin-dev libnvinfer-plugin10 libnvinfer-samples libnvinfer-vc-plugin-dev libnvinfer-vc-plugin10 libnvinfer10 libnvonnxparsers-dev libnvonnxparsers10 python3-libnvinfer-dev python3-libnvinfer-dispatch python3-libnvinfer-lean python3-libnvinfer tensorrt-dev tensorrt-libs tensorrt
```
RPM Installation#
This section contains instructions for installing TensorRT from an RPM package. This installation method is for new users or users who want the complete installation, including samples and documentation for both the C++ and Python APIs.
For advanced users who are already familiar with TensorRT and want to get their application running quickly or set up automation, follow the installation instructions for the network repo (refer to Using The NVIDIA CUDA Network Repo For RPM Installation).
Note
Before issuing the commands, you must replace `rhelx`, `10.x.x`, and `cuda-x.x` with your specific OS, TensorRT, and CUDA versions.

When installing Python packages using this method, you must manually install dependencies with `pip`.
Prerequisites
Ensure that you have the following dependencies installed.
- CUDA
- cuDNN 8.9.7 (optional; not required for lean or dispatch runtime installations)
Installation
Install CUDA according to the CUDA installation instructions.
Download the TensorRT local repo file that matches the RHEL/CentOS version and CPU architecture you are using.
Install TensorRT from the local repo RPM package.
os="rhelx" tag="10.x.x-cuda-x.x" sudo rpm -Uvh nv-tensorrt-local-repo-${os}-${tag}-1.0-1.x86_64.rpm sudo yum clean expire-cache
Package Type Install

| Package Type | Command |
|---|---|
| For the full C++ and Python runtimes | `sudo yum install tensorrt` |
| For the lean runtime only, instead of `tensorrt` | `sudo yum install libnvinfer-lean10`<br>`sudo yum install libnvinfer-vc-plugin10` |
| For the lean runtime Python package | `sudo yum install python3-libnvinfer-lean` |
| For the dispatch runtime only, instead of `tensorrt` | `sudo yum install libnvinfer-dispatch10`<br>`sudo yum install libnvinfer-vc-plugin10` |
| For the dispatch runtime Python package | `sudo yum install python3-libnvinfer-dispatch` |
| For all TensorRT Python packages without samples | `python3 -m pip install numpy`<br>`sudo yum install python3-libnvinfer-devel` |
The following additional packages will be installed:

- `python3-libnvinfer`
- `python3-libnvinfer-lean`
- `python3-libnvinfer-dispatch`

If you want to run samples that require `onnx-graphsurgeon`, or if you want to use the Python module in your own project, install the following:

```bash
python3 -m pip install numpy onnx onnx-graphsurgeon
```
Note
For Rocky Linux or RHEL 8.x users, be aware that the TensorRT Python bindings will only be installed for Python 3.8 due to package dependencies and for better Python support. If your default `python3` is version 3.6, you may need to use `update-alternatives` to switch to Python 3.8 by default, invoke Python using `python3.8`, or remove `python3.6` packages if they are no longer required. If you require Python modules for a Python version that is not the system’s default version, you should install the `*.whl` files directly from the tar package.

Verify the installation.
Package Type Verification

| Package Type | Command | Expected Output |
|---|---|---|
| For the full TensorRT release | `rpm -q tensorrt` | `tensorrt-10.9.0.x-1.cuda12.8.x86_64` |
| For the lean runtime or the dispatch runtime only | `rpm -qa \| grep nvinfer` | All related `libnvinfer*` packages you installed |
Using The NVIDIA CUDA Network Repo For RPM Installation#
This installation method is for advanced users already familiar with TensorRT and who want to get their application running quickly or set up automation. New users or users who want the complete installation, including samples and documentation, should follow the local repo installation instructions (refer to RPM Installation).
Note
If you are using a CUDA container, then the NVIDIA CUDA network repository will already be set up, and you can skip step 1.
Follow the CUDA Toolkit Download page instructions to install the CUDA network repository.
- Select the Linux operating system.
- Select the desired architecture.
- Select the CentOS, RHEL, or Rocky distribution.
- Select the desired CentOS, RHEL, or Rocky version.
- Select the rpm (network) installer type.
- Enter the commands provided into your terminal.

If you do not require the entire CUDA Toolkit, you can omit the final `yum/dnf install` command. While installing TensorRT, `yum/dnf` automatically downloads the required CUDA dependencies.

Install the TensorRT package that fits your particular needs. When using the NVIDIA CUDA network repository, RHEL will, by default, install TensorRT for the latest CUDA version. If you need the libraries for other CUDA versions, refer to step 3.
Package Type Install

| Package Type | Command |
|---|---|
| For the lean runtime only | `sudo yum install libnvinfer-lean10` |
| For the lean runtime Python package | `sudo yum install python3-libnvinfer-lean` |
| For the dispatch runtime only | `sudo yum install libnvinfer-dispatch10` |
| For the dispatch runtime Python package | `sudo yum install python3-libnvinfer-dispatch` |
| For only running TensorRT C++ applications | `sudo yum install tensorrt-libs` |
| For also building TensorRT C++ applications | `sudo yum install tensorrt-devel` |
| For also building TensorRT C++ applications, lean only | `sudo yum install libnvinfer-lean-devel` |
| For also building TensorRT C++ applications, dispatch only | `sudo yum install libnvinfer-dispatch-devel` |
| For the standard runtime Python package | `python3 -m pip install numpy`<br>`sudo yum install python3-libnvinfer` |
If your application requires other Python modules, such as `onnx-graphsurgeon`, use `pip` to install them. Refer to onnx-graphsurgeon · PyPI for additional information.

The following commands install `tensorrt` and related TensorRT packages for an older CUDA version and hold these packages at that version. Replace `10.x.x.x` with your version of TensorRT and `cudax.x` with your CUDA version.

```bash
version="10.x.x.x-1.cudax.x"
sudo yum install libnvinfer-bin-${version} libnvinfer-devel-${version} libnvinfer-dispatch-devel-${version} libnvinfer-dispatch10-${version} libnvinfer-headers-devel-${version} libnvinfer-headers-plugin-devel-${version} libnvinfer-lean-devel-${version} libnvinfer-lean10-${version} libnvinfer-plugin-devel-${version} libnvinfer-plugin10-${version} libnvinfer-samples-${version} libnvinfer-vc-plugin-devel-${version} libnvinfer-vc-plugin10-${version} libnvinfer10-${version} libnvonnxparsers-devel-${version} libnvonnxparsers10-${version} python3-libnvinfer-${version} python3-libnvinfer-devel-${version} python3-libnvinfer-dispatch-${version} python3-libnvinfer-lean-${version} tensorrt-${version} tensorrt-devel-${version} tensorrt-libs-${version}
sudo yum install yum-plugin-versionlock
sudo yum versionlock libnvinfer-bin libnvinfer-devel libnvinfer-dispatch-devel libnvinfer-dispatch10 libnvinfer-headers-devel libnvinfer-headers-plugin-devel libnvinfer-lean-devel libnvinfer-lean10 libnvinfer-plugin-devel libnvinfer-plugin10 libnvinfer-samples libnvinfer-vc-plugin-devel libnvinfer-vc-plugin10 libnvinfer10 libnvonnxparsers-devel libnvonnxparsers10 python3-libnvinfer python3-libnvinfer-devel python3-libnvinfer-dispatch python3-libnvinfer-lean tensorrt tensorrt-devel tensorrt-libs
```
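To confirm the locks took effect, the versionlock plugin can list the currently locked packages (a quick sanity check, not part of the original steps):

```bash
# Show all packages currently pinned by yum-plugin-versionlock.
sudo yum versionlock list
```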
If you want to upgrade to the latest version of TensorRT or the newest version of CUDA, you can unhold the packages using the following command.
```bash
sudo yum versionlock delete libnvinfer-bin libnvinfer-devel libnvinfer-dispatch-devel libnvinfer-dispatch10 libnvinfer-headers-devel libnvinfer-headers-plugin-devel libnvinfer-lean-devel libnvinfer-lean10 libnvinfer-plugin-devel libnvinfer-plugin10 libnvinfer-samples libnvinfer-vc-plugin-devel libnvinfer-vc-plugin10 libnvinfer10 libnvonnxparsers-devel libnvonnxparsers10 python3-libnvinfer python3-libnvinfer-devel python3-libnvinfer-dispatch python3-libnvinfer-lean tensorrt tensorrt-devel tensorrt-libs
```
Tar File Installation#
This section contains instructions for installing TensorRT from a tar file.
Prerequisites
Ensure that you have the following dependencies installed.
- CUDA
- cuDNN 8.9.7 (optional)
- Python 3 (optional)
Installation
Download the TensorRT tar file that matches the CPU architecture and CUDA version you are using.
Choose where you want to install TensorRT. This tar file will install everything into a subdirectory called `TensorRT-10.x.x.x`.

Unpack the tar file.

```bash
version="10.x.x.x"
arch=$(uname -m)
cuda="cuda-x.x"
tar -xzvf TensorRT-${version}.Linux.${arch}-gnu.${cuda}.tar.gz
```
Where:

- `10.x.x.x` is your TensorRT version
- `cuda-x.x` is CUDA version `11.8` or `12.8`

This directory will have sub-directories like `lib`, `include`, `data`, and so on.

```bash
ls TensorRT-${version}
bin data doc include lib python samples targets
```
Add the absolute path to the TensorRT `lib` directory to the environment variable `LD_LIBRARY_PATH`:

```bash
export LD_LIBRARY_PATH=<TensorRT-${version}/lib>:$LD_LIBRARY_PATH
```
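The export above only affects the current shell session. To make it persistent, you can append it to your shell startup file (a common approach, assuming bash; replace the path with wherever you unpacked the tar file):

```bash
# Persist the library path across shell sessions (path is illustrative).
echo 'export LD_LIBRARY_PATH=/path/to/TensorRT-10.x.x.x/lib:$LD_LIBRARY_PATH' >> ~/.bashrc
source ~/.bashrc
```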
Install the Python TensorRT wheel file (replace `cp3x` with the desired Python version, for example, `cp310` for Python 3.10).

```bash
cd TensorRT-${version}/python
python3 -m pip install tensorrt-*-cp3x-none-linux_x86_64.whl
```
Optionally, install the TensorRT lean and dispatch runtime wheel files:
```bash
python3 -m pip install tensorrt_lean-*-cp3x-none-linux_x86_64.whl
python3 -m pip install tensorrt_dispatch-*-cp3x-none-linux_x86_64.whl
```
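As a quick sanity check (a convenience step, not part of the official verification below), confirm that the wheel you installed is importable:

```bash
python3 -c "import tensorrt; print(tensorrt.__version__)"
```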
Verify the installation.
Ensure that the installed files are located in the correct directories. For example, run the `tree -d` command to check whether all supported installed files are in place in the `lib`, `include`, and `data` directories, and so on.

Build and run one of the shipped samples, sampleOnnxMNIST, in the installed directory. You should be able to compile and execute the sample without additional settings. For more information, refer to sampleOnnxMNIST.
The Python samples are in the `samples/python` directory.
Zip File Installation#
This section contains instructions for installing TensorRT from a zip package on Windows.
Prerequisites
Ensure that you have the following dependencies installed.
- CUDA
- cuDNN 8.9.7 (optional)
Installation
Download the TensorRT zip file for Windows.
Choose where you want to install TensorRT. This zip file will install everything into a subdirectory called `TensorRT-10.x.x.x`. This new subdirectory will be referred to as `<installpath>` in the steps below.

Unzip the `TensorRT-10.x.x.x.Windows.win10.cuda-x.x.zip` file to the location that you chose.

Where:

- `10.x.x.x` is your TensorRT version
- `cuda-x.x` is CUDA version `11.8` or `12.8`
Add the TensorRT library files to your system `PATH`. There are two ways to accomplish this task:

- Leave the DLL files where they were unzipped and add `<installpath>/lib` to your system `PATH`. You can add a new path to your system `PATH` using the steps below.

  1. Press the Windows key and search for environment variables. You should then be able to click Edit the System Environment Variables.
  2. Click Environment Variables… at the bottom of the window.
  3. Under System variables, select Path and click Edit….
  4. Click either New or Browse to add a new item that contains `<installpath>/lib`.
  5. Continue to click OK until all the newly opened windows are closed.

- Copy the DLL files from `<installpath>/lib` to your CUDA installation directory, for example, `C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\vX.Y\bin`, where `vX.Y` is your CUDA version. The CUDA installer should have already added the CUDA path to your system `PATH`.
Install one of the TensorRT Python wheel files from `<installpath>/python` (replace `cp3x` with the desired Python version, for example, `cp310` for Python 3.10):

```
python.exe -m pip install tensorrt-*-cp3x-none-win_amd64.whl
```
Optionally, install the TensorRT lean and dispatch runtime wheel files:
```
python.exe -m pip install tensorrt_lean-*-cp3x-none-win_amd64.whl
python.exe -m pip install tensorrt_dispatch-*-cp3x-none-win_amd64.whl
```
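As on Linux, a quick import check (a convenience step, not part of the official verification below) confirms the wheel is importable:

```
python.exe -c "import tensorrt; print(tensorrt.__version__)"
```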
Verify the installation.
Open a Visual Studio Solution file from one of the samples, such as sampleOnnxMNIST, and confirm that you can build and run the sample.
If you want to use TensorRT in your project, ensure that the following is present in your Visual Studio Solution project properties:
- `<installpath>/lib` has been added to your `PATH` variable and is present under VC++ Directories > Executable Directories.
- `<installpath>/include` is present under C/C++ > General > Additional Directories.
- `nvinfer.lib` and any other `LIB` files your project requires are present under Linker > Input > Additional Dependencies.
Note
You should install Visual Studio 2019 or later to build the included samples. The community edition is sufficient to build the TensorRT samples.
Additional Installation Methods#
Aside from installing TensorRT from the product package, you can also install TensorRT from the following locations:
- NVIDIA NIM
For developing AI-powered enterprise applications and deploying AI models in production. Refer to the NVIDIA NIM technical blog post for more information.
- TensorRT container
The TensorRT container provides an easy method for deploying TensorRT with all necessary dependencies already packaged in the container. For information about installing TensorRT using a container, refer to the NVIDIA TensorRT Container Release Notes.
- NVIDIA JetPack
Bundles all Jetson platform software, including TensorRT. Use it to flash your Jetson Developer Kit with the latest OS image, install NVIDIA SDKs, and jumpstart your development environment. For information about installing TensorRT through JetPack, refer to the JetPack documentation. For JetPack downloads, refer to the Develop: JetPack page.
- DRIVE OS Linux Standard
For step-by-step instructions on installing TensorRT, refer to the NVIDIA DRIVE Platform Installation section with NVIDIA SDK Manager. The safety proxy runtime is not installed by default in the NVIDIA DRIVE OS Linux SDK. To install it on this platform, refer to the DRIVE OS Installation Guide.
Cross-Compile Installation#
If you intend to cross-compile TensorRT for AArch64, start with the Using The NVIDIA CUDA Network Repo For Debian Installation section to set up the network repository and TensorRT for the host. Steps to prepare your machine for cross-compilation and instructions for cross-compiling the TensorRT samples can be found in Cross Compiling Samples.