Installing TensorRT-RTX#
TensorRT-RTX can be installed from an SDK zip file on Windows or a tarball on Linux. Before proceeding, ensure you have met all Prerequisites.
Windows SDK Installation#
Decompress the Windows zip package to extract the following:
- Core library and ONNX parser DLLs and import libraries
- Development headers
- Source code samples
- Documentation
- Python bindings
- The `tensorrt_rtx` executable for building engines and running inference from the command line
- Licensing information and open-source acknowledgments
Note

The DLLs are signed and can be verified with the `signtool` utility.

Remember to add the directories containing the DLLs and executable files to your `PATH` environment variable.

Optionally, install the Python bindings.
```powershell
$version = "1.4.0.0"  # Replace with the newest version
$arch = "amd64"       # Replace with your architecture
$pyversion = "311"    # For Python 3.11; replace with your Python version
$wheel = "TensorRT-RTX-$version\tensorrt_rtx-$version-cp$pyversion-none-win_$arch.whl"
python -m pip install $wheel
```
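The wheel filename encodes the SDK version, the CPython tag, and the architecture. As a quick illustration of how the pieces fit together, the following sketch (the `rtx_wheel_name` helper is mine, not part of the SDK) builds the expected filename for the interpreter you are running:

```python
import sys

def rtx_wheel_name(version: str, arch: str = "amd64") -> str:
    """Build the expected TensorRT-RTX wheel filename for the running
    Python interpreter (illustrative helper, not part of the SDK)."""
    # CPython tag, e.g. "cp311" for Python 3.11
    pytag = f"cp{sys.version_info.major}{sys.version_info.minor}"
    return f"tensorrt_rtx-{version}-{pytag}-none-win_{arch}.whl"

print(rtx_wheel_name("1.4.0.0"))
```

If pip reports that the wheel "is not a supported wheel on this platform", the `cp` tag in the filename most likely does not match your interpreter version.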
Linux Tarball Installation#
TensorRT-RTX can be installed from a tarball package on Linux. For supported distributions and CUDA requirements, refer to the Prerequisites page.
Download the tarball that matches your operating system version and CPU architecture.
Extract the tarball, then optionally add the path of the executable to your `PATH` variable and the path of the library to your `LD_LIBRARY_PATH` variable.

```shell
version="1.4.0.0"  # Replace with the newest version
arch="x86_64"      # Replace with your architecture
cuda="12.9"        # Replace with your CUDA version
tarfile="TensorRT-RTX-${version}.Linux.${arch}-gnu-${cuda}.tar.gz"
tar -xzf "$tarfile"
export PATH=$PATH:$PWD/TensorRT-RTX-${version}/bin
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$PWD/TensorRT-RTX-${version}/lib
```
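A missed `export` is a common cause of "command not found" or shared-library load errors later. As a sanity check, a short script can confirm the environment matches the layout above; this sketch assumes the `bin`/`lib` layout from the tarball, and the `check_install` helper name is illustrative, not part of the SDK:

```python
import os
import shutil

def check_install(sdk_root: str) -> list:
    """Report common post-install problems (illustrative sketch):
    bin/ on PATH, lib/ on LD_LIBRARY_PATH, executable discoverable."""
    problems = []
    bin_dir = os.path.join(sdk_root, "bin")
    lib_dir = os.path.join(sdk_root, "lib")
    if bin_dir not in os.environ.get("PATH", "").split(os.pathsep):
        problems.append(f"{bin_dir} is not on PATH")
    if lib_dir not in os.environ.get("LD_LIBRARY_PATH", "").split(os.pathsep):
        problems.append(f"{lib_dir} is not on LD_LIBRARY_PATH")
    if shutil.which("tensorrt_rtx") is None:
        problems.append("tensorrt_rtx executable not found on PATH")
    return problems
```

An empty list means the three checks passed; each string in the result names the step to revisit.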
Optionally, install the Python bindings.
```shell
pyversion="311"  # Assuming Python 3.11; replace with your Python version
wheel="tensorrt_rtx-${version}-cp${pyversion}-none-linux_${arch}.whl"
python3 -m pip install TensorRT-RTX-${version}/python/${wheel}
```
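Rather than hard-coding the `cp` tag, you can select the matching wheel from the package's `python/` directory programmatically. This sketch assumes the wheel naming scheme shown above; the `find_matching_wheel` helper is mine, not part of the SDK:

```python
import sys
from pathlib import Path
from typing import Optional

def find_matching_wheel(python_dir: str) -> Optional[Path]:
    """Pick the wheel in the SDK's python/ directory whose CPython tag
    matches the running interpreter (illustrative helper)."""
    pytag = f"cp{sys.version_info.major}{sys.version_info.minor}"
    for whl in sorted(Path(python_dir).glob("tensorrt_rtx-*.whl")):
        if f"-{pytag}-" in whl.name:
            return whl
    return None  # no wheel for this interpreter version
```

A `None` result means the SDK ships no wheel for your interpreter, which is worth catching before handing the path to pip.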
Next Steps#
After installation, verify your setup and deploy your first model:
- Deploy Your First Model — Build an engine from an ONNX model and run inference
- ONNX Conversion Guide — Export your own model to ONNX from PyTorch, TensorFlow, or Hugging Face