Abstract

This Samples Support Guide provides an overview of all the supported TensorRT 7.2.0 samples included on GitHub and in the product package. The TensorRT samples specifically help in areas such as recommenders, machine translation, character recognition, image classification, and object detection.

For previously released TensorRT developer documentation, see TensorRT Archives.

1. Introduction

The following samples show how to use TensorRT in numerous use cases while highlighting different capabilities of the interface.
Title | TensorRT Sample Name | Description
trtexec | trtexec | A tool to quickly utilize TensorRT without having to develop your own application.
“Hello World” For TensorRT | sampleMNIST | Performs the basic setup and initialization of TensorRT using the Caffe parser.
Building A Simple MNIST Network Layer By Layer | sampleMNISTAPI | Uses the TensorRT API to build an MNIST (handwritten digit recognition) network layer by layer, sets up weights and inputs/outputs, and then performs inference.
Importing The TensorFlow Model And Running Inference | sampleUffMNIST | Imports a TensorFlow model trained on the MNIST dataset.
“Hello World” For TensorRT From ONNX | sampleOnnxMNIST | Converts a model trained on the MNIST dataset in ONNX format to a TensorRT network.
Building And Running GoogleNet In TensorRT | sampleGoogleNet | Shows how to import a model trained with Caffe into TensorRT using GoogleNet as an example.
Building An RNN Network Layer By Layer | sampleCharRNN | Uses the TensorRT API to build an RNN network layer by layer, sets up weights and inputs/outputs, and then performs inference.
Performing Inference In INT8 Using Custom Calibration | sampleINT8 | Performs INT8 calibration and inference. Calibrates a network for execution in INT8.
Performing Inference In INT8 Precision | sampleINT8API | Sets per-tensor dynamic range and computation precision of a layer.
Adding A Custom Layer To Your Network In TensorRT | samplePlugin | Defines a custom layer that supports multiple data formats and can be serialized and deserialized. Enables a custom layer in NvCaffeParser.
Object Detection With Faster R-CNN | sampleFasterRCNN | Uses TensorRT plugins, performs inference, and implements a fused custom layer for end-to-end inferencing of a Faster R-CNN model.
Object Detection With A TensorFlow SSD Network | sampleUffSSD | Preprocesses the TensorFlow SSD network, performs inference on the SSD network in TensorRT, and uses TensorRT plugins to speed up inference.
Movie Recommendation Using Neural Collaborative Filter (NCF) | sampleMovieLens | An end-to-end sample that imports a trained TensorFlow model and predicts the highest-rated movie for each user.
Movie Recommendation Using MPS (Multi-Process Service) | sampleMovieLensMPS | An end-to-end sample that imports a trained TensorFlow model and predicts the highest-rated movie for each user using MPS (Multi-Process Service).
Object Detection With SSD | sampleSSD | Preprocesses the input to the SSD network, performs inference on the SSD network in TensorRT, uses TensorRT plugins to speed up inference, and performs INT8 calibration on an SSD network.
“Hello World” For Multilayer Perceptron (MLP) | sampleMLP | Shows how to create a network that triggers the multi-layer perceptron (MLP) optimizer.
Specifying I/O Formats Using The Reformat Free I/O APIs | sampleReformatFreeIO | Uses a Caffe model that was trained on the MNIST dataset and performs engine building and inference using TensorRT. The correctness of outputs is then compared to the golden reference.
Adding A Custom Layer That Supports INT8 I/O To Your Network In TensorRT | sampleUffPluginV2Ext | Demonstrates how to extend INT8 I/O for a plugin that is introduced in TensorRT 6.x.x.
Digit Recognition With Dynamic Shapes In TensorRT | sampleDynamicReshape | Demonstrates how to use dynamic input dimensions in TensorRT by creating an engine for resizing dynamically shaped inputs to the correct size for an ONNX MNIST model.
Neural Machine Translation (NMT) Using A Sequence To Sequence (seq2seq) Model | sampleNMT | Demonstrates the implementation of Neural Machine Translation (NMT) based on a TensorFlow seq2seq model using the TensorRT API.
Object Detection And Instance Segmentation With A TensorFlow Mask R-CNN Network | sampleUffMaskRCNN | Performs inference on the Mask R-CNN network in TensorRT. Mask R-CNN is based on the Mask R-CNN paper, which performs the task of object detection and object mask predictions on a target image.
Object Detection With A TensorFlow Faster R-CNN Network | sampleUffFasterRCNN | Serves as a demo of how to use a pre-trained Faster R-CNN model in the Transfer Learning Toolkit to do inference with TensorRT.
Algorithm Selection API Usage Example Based On sampleMNIST In TensorRT | sampleAlgorithmSelector | End-to-end example of how to use the algorithm selection API based on sampleMNIST.
Introduction To Importing Caffe, TensorFlow And ONNX Models Into TensorRT Using Python | introductory_parser_samples | Uses TensorRT and its included suite of parsers (the UFF, Caffe and ONNX parsers) to perform inference with ResNet-50 models trained with various frameworks.
“Hello World” For TensorRT Using TensorFlow And Python | end_to_end_tensorflow_mnist | An end-to-end sample that trains a model in TensorFlow and Keras, freezes the model and writes it to a protobuf file, converts it to UFF, and finally runs inference using TensorRT.
“Hello World” For TensorRT Using PyTorch And Python | network_api_pytorch_mnist | An end-to-end sample that trains a model in PyTorch, recreates the network in TensorRT, imports weights from the trained model, and finally runs inference with a TensorRT engine.
Adding A Custom Layer To Your TensorFlow Network In TensorRT In Python | uff_custom_plugin | Implements a clip layer (as a CUDA kernel), wraps the implementation in a TensorRT plugin (with a corresponding plugin creator), and generates a shared library module containing its code.
Object Detection With The ONNX TensorRT Backend In Python | yolov3_onnx | Implements a full ONNX-based pipeline for performing inference with the YOLOv3-608 network, including pre- and post-processing.
Object Detection With SSD In Python | uff_ssd | Implements a full UFF-based pipeline for performing inference with an SSD (InceptionV2 feature extractor) network. The sample downloads a trained ssd_inception_v2_coco_2017_11_17 model and uses it to perform inference. Additionally, it superimposes bounding boxes on the input image as a post-processing step.
INT8 Calibration In Python | int8_caffe_mnist | Demonstrates how to calibrate an engine to run in INT8 mode.
Refitting An Engine In Python | engine_refit_mnist | Trains an MNIST model in PyTorch, recreates the network in TensorRT with dummy weights, and finally refits the TensorRT engine with weights from the model.
TensorRT Inference Of ONNX Models With Custom Layers In Python | onnx_packnet | Uses TensorRT to perform inference with a PackNet network. This sample demonstrates the use of custom layers in an ONNX graph and processes them using the ONNX GraphSurgeon API.

1.1. Getting Started With C++ Samples

You can find the C++ samples in the /usr/src/tensorrt/samples package directory as well as on GitHub. The following C++ samples are shipped with TensorRT.

Getting Started With C++ Samples

Every C++ sample includes a README.md file in GitHub that provides detailed information about how the sample works, sample code, and step-by-step instructions on how to run and verify its output.

Running C++ Samples on Linux

If you installed TensorRT using the Debian files, copy /usr/src/tensorrt to a new directory first before building the C++ samples. If you installed TensorRT using the tar file, then the samples are located in {TAR_EXTRACT_PATH}/samples. To build all the samples and then run one of the samples, use the following commands:
$ cd <samples_dir>
$ make -j4
$ cd ../bin
$ ./<sample_bin>

Running C++ Samples on Windows

All of the C++ samples on Windows are provided as Visual Studio Solution files. To build a sample, open its corresponding Visual Studio Solution file and build the solution. The output executable will be generated in (ZIP_EXTRACT_PATH)\bin. You can then run the executable directly or through Visual Studio.

1.2. Getting Started With Python Samples

You can find the Python samples in the /usr/src/tensorrt/samples/python package directory. The following Python samples are shipped with TensorRT.

Getting Started With Python Samples

Every Python sample includes a README.md file. Refer to the /usr/src/tensorrt/samples/python/<sample-name>/README.md file for detailed information about how the sample works, sample code, and step-by-step instructions on how to run and verify its output.

Running Python Samples

To run one of the Python samples, the process typically involves two steps:
  1. Install the sample requirements:
    python<x> -m pip install -r requirements.txt
    
    where python<x> is either python2 or python3.
  2. Run the sample code with the data directory provided if the TensorRT sample data is not in the default location. For example:
    python<x> sample.py [-d DATA_DIR]

For more information on running samples, see the README.md file included with the sample.

2. Cross Compiling Samples For AArch64 Users

The following sections show how to cross-compile TensorRT samples for AArch64 QNX and Linux platforms under x86_64 Linux.

2.1. Prerequisites

This section provides step-by-step instructions to ensure you meet the minimum requirements to cross-compile.

Procedure

  1. Install the CUDA cross-platform toolkit for the corresponding target and set the environment variable CUDA_INSTALL_DIR.
    $ export CUDA_INSTALL_DIR="your cuda install dir"

    Where CUDA_INSTALL_DIR is set to /usr/local/cuda by default.

  2. Install the cuDNN cross-platform libraries for the corresponding target and set the environment variable CUDNN_INSTALL_DIR.
    $ export CUDNN_INSTALL_DIR="your cudnn install dir"

    Where CUDNN_INSTALL_DIR is set to CUDA_INSTALL_DIR by default.

  3. Install the TensorRT cross-compilation Debian packages for the corresponding target.
    Note: If you are using the tar file release for the target platform, then you can safely skip this step. The tar file release already includes the cross-compile libraries so no additional packages are required.
    QNX AArch64
    libnvinfer-dev-cross-qnx, libnvinfer7-cross-qnx
    Linux AArch64
    libnvinfer-dev-cross-aarch64, libnvinfer7-cross-aarch64
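    For example, on an x86_64 Ubuntu host with the TensorRT local repository already configured, the Linux AArch64 packages from step 3 could be installed as follows (a hedged sketch; exact package versions depend on your TensorRT release):
    $ sudo apt-get install libnvinfer-dev-cross-aarch64 libnvinfer7-cross-aarch64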

2.2. Building Samples For QNX AArch64

This section provides step-by-step instructions to build samples for QNX users.
  1. Download the QNX toolchain and export the following environment variables.
    $ export QNX_HOST=/path/to/your/qnx/toolchain/host/linux/x86_64
    $ export QNX_TARGET=/path/to/your/qnx/toolchain/target/qnx7
    
    
  2. Build the samples by issuing:
    $ cd /path/to/TensorRT/samples
    $ make TARGET=qnx
    

2.3. Building Samples For Linux AArch64

This section provides step-by-step instructions to build samples for Linux users.
  1. Install the corresponding GCC compiler, aarch64-linux-gnu-g++. In Ubuntu, this can be installed via:
    $ sudo apt-get install g++-aarch64-linux-gnu
  2. Build the samples by issuing:
    $ cd /path/to/TensorRT/samples
    $ make TARGET=aarch64
    

3. Recommenders

Recommender systems are used to provide product or media recommendations to users of social networking, media content consumption, and e-commerce platforms. MLP-based Neural Collaborative Filter (NCF) recommenders employ a stack of fully-connected or matrix multiplication layers to generate recommendations.

3.1. Movie Recommendation Using Neural Collaborative Filter (NCF)

This sample, sampleMovieLens, is an end-to-end sample that imports a trained TensorFlow model and predicts the highest-rated movie for each user. This sample demonstrates a simple movie recommender system using a multi-layer perceptron (MLP) based Neural Collaborative Filter (NCF) recommender.

What does this sample do?

Specifically, this sample demonstrates how to generate weights for a MovieLens dataset that TensorRT can then accelerate.

Where is this sample located?

This sample is maintained under the samples/opensource/sampleMovieLens directory in the GitHub: sampleMovieLens repository. If using the Debian or RPM package, the sample is located at /usr/src/tensorrt/samples/sampleMovieLens. If using the tar or zip package, the sample is at <extracted_path>/samples/sampleMovieLens.

How do I get started?

For more information about getting started, see Getting Started With C++ Samples. For specifics about this sample, refer to the GitHub: sampleMovieLens/README.md file for detailed information about how this sample works, sample code, and step-by-step instructions on how to run and verify its output.

3.2. Movie Recommendation Using MPS (Multi-Process Service)

This sample, sampleMovieLensMPS, is an end-to-end sample that imports a trained TensorFlow model and predicts the highest-rated movie for each user using MPS (Multi-Process Service).

What does this sample do?

MPS allows multiple CUDA processes to share a single GPU context. With MPS, multiple overlapping kernel execution and memcpy operations from different processes can be scheduled concurrently to achieve maximum utilization. This can be especially effective in increasing parallelism for small networks with low resource utilization such as those primarily consisting of a series of small MLPs.

This sample is identical to Movie Recommendation Using Neural Collaborative Filter (NCF) in terms of functionality but is modified to support concurrent execution in multiple processes. Specifically, this sample demonstrates how to generate weights for a MovieLens dataset that TensorRT can then accelerate.
Note: Currently, sampleMovieLensMPS supports only Linux x86-64 (including Ubuntu and Red Hat) desktop users.
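
Before launching multiple sampleMovieLensMPS processes, the MPS control daemon typically needs to be running on the target GPU. The following is a minimal sketch assuming a default single-GPU setup; refer to the CUDA MPS documentation and the sample README for the exact procedure:

    $ export CUDA_VISIBLE_DEVICES=0
    $ nvidia-cuda-mps-control -d            # start the MPS control daemon
    # ... run the sample from multiple processes ...
    $ echo quit | nvidia-cuda-mps-control   # stop the daemon when finished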

Where is this sample located?

This sample is maintained under the samples/opensource/sampleMovieLensMPS directory in the GitHub: sampleMovieLensMPS repository. If using the Debian or RPM package, the sample is located at /usr/src/tensorrt/samples/sampleMovieLensMPS. If using the tar or zip package, the sample is at <extracted_path>/samples/sampleMovieLensMPS.

How do I get started?

For more information about getting started, see Getting Started With C++ Samples. For specifics about this sample, refer to the GitHub: sampleMovieLensMPS/README.md file for detailed information about how this sample works, sample code, and step-by-step instructions on how to run and verify its output.

3.3. “Hello World” For Multilayer Perceptron (MLP)

This sample, sampleMLP, is a simple hello world example that shows how to create a network that triggers the multilayer perceptron (MLP) optimizer. TensorRT can then accelerate the generated MLP network.

Where is this sample located?

This sample is maintained under the samples/opensource/sampleMLP directory in the GitHub: sampleMLP repository. If using the Debian or RPM package, the sample is located at /usr/src/tensorrt/samples/sampleMLP. If using the tar or zip package, the sample is at <extracted_path>/samples/sampleMLP.

How do I get started?

For more information about getting started, see Getting Started With C++ Samples. For specifics about this sample, refer to the GitHub: sampleMLP/README.md file for detailed information about how this sample works, sample code, and step-by-step instructions on how to run and verify its output.

4. Machine Translation

Machine translation systems are used to translate text from one language to another language. Recurrent neural networks (RNN) are one of the most popular deep learning solutions for machine translation.
Some examples of TensorRT machine translation samples include the following:

4.1. Neural Machine Translation (NMT) Using A Sequence To Sequence (seq2seq) Model

This sample, sampleNMT, demonstrates the implementation of Neural Machine Translation (NMT) based on a TensorFlow seq2seq model using the TensorRT API. The TensorFlow seq2seq model is an open-sourced NMT project that uses deep neural networks to translate text from one language to another language.

What does this sample do?

Specifically, this sample is an end-to-end sample that takes a TensorFlow model, builds an engine, and runs inference using the generated network. The sample is intended to be modular so it can be used as a starting point for your machine translation application.

This sample implements German to English translation using the data that is provided by and trained from the TensorFlow NMT (seq2seq) Tutorial.

Where is this sample located?

This sample is maintained under the samples/opensource/sampleNMT directory in the GitHub: sampleNMT repository. If using the Debian or RPM package, the sample is located at /usr/src/tensorrt/samples/sampleNMT. If using the tar or zip package, the sample is at <extracted_path>/samples/sampleNMT.

How do I get started?

For more information about getting started, see Getting Started With C++ Samples. For specifics about this sample, refer to the GitHub: sampleNMT/README.md file for detailed information about how this sample works, sample code, and step-by-step instructions on how to run and verify its output.

4.2. Building An RNN Network Layer By Layer

This sample, sampleCharRNN, uses the TensorRT API to build an RNN network layer by layer, sets up weights and inputs/outputs and then performs inference.

What does this sample do?

Specifically, this sample creates a CharRNN network that has been trained on the Tiny Shakespeare dataset. For more information about character level modeling, see char-rnn.

TensorFlow has a useful RNN Tutorial which can be used to train a word-level model. Word level models learn a probability distribution over a set of all possible word sequences. Since our goal is to train a char level model, which learns a probability distribution over a set of all possible characters, a few modifications will need to be made to get the TensorFlow sample to work. These modifications can be seen here.

Where is this sample located?

This sample is maintained under the samples/opensource/sampleCharRNN directory in the GitHub: sampleCharRNN repository. If using the Debian or RPM package, the sample is located at /usr/src/tensorrt/samples/sampleCharRNN. If using the tar or zip package, the sample is at <extracted_path>/samples/sampleCharRNN.

How do I get started?

For more information about getting started, see Getting Started With C++ Samples. For specifics about this sample, refer to the GitHub: sampleCharRNN/README.md file for detailed information about how this sample works, sample code, and step-by-step instructions on how to run and verify its output.

5. Character Recognition

5.1. “Hello World” For TensorRT

This sample, sampleMNIST, is a simple hello world example that performs the basic setup and initialization of TensorRT using the Caffe parser.
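
The core of the setup can be sketched with the TensorRT Python API as follows. This is a minimal, hedged sketch: the sample itself is written in C++, and the file names and output blob name below are placeholders.

    import tensorrt as trt

    TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

    # Parse a Caffe prototxt/caffemodel pair and build an engine from it.
    with trt.Builder(TRT_LOGGER) as builder, \
         builder.create_network() as network, \
         builder.create_builder_config() as config, \
         trt.CaffeParser() as parser:
        model_tensors = parser.parse(deploy="mnist.prototxt", model="mnist.caffemodel",
                                     network=network, dtype=trt.float32)
        network.mark_output(model_tensors.find("prob"))  # mark the output blob by name
        config.max_workspace_size = 1 << 28              # 256 MiB of builder scratch space
        engine = builder.build_engine(network, config)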

Where is this sample located?

This sample is maintained under the samples/opensource/sampleMNIST directory in the GitHub: sampleMNIST repository. If using the Debian or RPM package, the sample is located at /usr/src/tensorrt/samples/sampleMNIST. If using the tar or zip package, the sample is at <extracted_path>/samples/sampleMNIST.

How do I get started?

For more information about getting started, see Getting Started With C++ Samples. For specifics about this sample, refer to the GitHub: sampleMNIST/README.md file for detailed information about how this sample works, sample code, and step-by-step instructions on how to run and verify its output.

5.2. Building A Simple MNIST Network Layer By Layer

This sample, sampleMNISTAPI, uses the TensorRT API to build an engine for a model trained on the MNIST dataset.

What does this sample do?

Specifically, it creates the network layer by layer, sets up weights and inputs/outputs, and then performs inference. This sample is similar to sampleMNIST. Both of these samples use the same model weights, handle the same input, and expect similar output.

Where is this sample located?

This sample is maintained under the samples/opensource/sampleMNISTAPI directory in the GitHub: sampleMNISTAPI repository. If using the Debian or RPM package, the sample is located at /usr/src/tensorrt/samples/sampleMNISTAPI. If using the tar or zip package, the sample is at <extracted_path>/samples/sampleMNISTAPI.

How do I get started?

For more information about getting started, see Getting Started With C++ Samples. For specifics about this sample, refer to the GitHub: sampleMNISTAPI/README.md file for detailed information about how this sample works, sample code, and step-by-step instructions on how to run and verify its output.

5.3. Importing The TensorFlow Model And Running Inference

This sample, sampleUffMNIST, imports a TensorFlow model trained on the MNIST dataset.

What does this sample do?

The MNIST TensorFlow model has been converted to UFF (Universal Framework Format) using the explanation described in Working With TensorFlow.

UFF is designed to store neural networks as a graph. The NvUffParser used in this sample parses the UFF file in order to create an inference engine based on that neural network.

With TensorRT, you can take a TensorFlow trained model, export it into a UFF protobuf file (.uff) using the UFF converter, and import it using the UFF parser.
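
A minimal sketch of that workflow with the UFF converter and the TensorRT Python API follows; the file names, node names, and input shape are placeholders, and the sample itself is written in C++.

    import tensorrt as trt
    import uff

    # Convert a frozen TensorFlow graph to UFF, then parse it into a TensorRT network.
    uff.from_tensorflow_frozen_model("mnist_frozen.pb", output_nodes=["out"],
                                     output_filename="mnist.uff")

    TRT_LOGGER = trt.Logger(trt.Logger.WARNING)
    with trt.Builder(TRT_LOGGER) as builder, \
         builder.create_network() as network, \
         builder.create_builder_config() as config, \
         trt.UffParser() as parser:
        parser.register_input("input", (1, 28, 28))   # CHW shape of the input tensor
        parser.register_output("out")
        parser.parse("mnist.uff", network)
        config.max_workspace_size = 1 << 28
        engine = builder.build_engine(network, config)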

Where is this sample located?

This sample is maintained under the samples/opensource/sampleUffMNIST directory in the GitHub: sampleUffMNIST repository. If using the Debian or RPM package, the sample is located at /usr/src/tensorrt/samples/sampleUffMNIST. If using the tar or zip package, the sample is at <extracted_path>/samples/sampleUffMNIST.

How do I get started?

For more information about getting started, see Getting Started With C++ Samples. For specifics about this sample, refer to the GitHub: sampleUffMNIST/README.md file for detailed information about how this sample works, sample code, and step-by-step instructions on how to run and verify its output.

5.4. “Hello World” For TensorRT From ONNX

This sample, sampleOnnxMNIST, converts a model trained on the MNIST dataset in Open Neural Network Exchange (ONNX) format to a TensorRT network and runs inference on that network. ONNX is a standard for representing deep learning models that enables models to be transferred between frameworks.
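
A minimal sketch of the same conversion with the TensorRT Python API (the ONNX file name is a placeholder; the sample itself is written in C++):

    import tensorrt as trt

    TRT_LOGGER = trt.Logger(trt.Logger.WARNING)
    EXPLICIT_BATCH = 1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)

    # Parse an ONNX MNIST model into a TensorRT network and build an engine.
    with trt.Builder(TRT_LOGGER) as builder, \
         builder.create_network(EXPLICIT_BATCH) as network, \
         builder.create_builder_config() as config, \
         trt.OnnxParser(network, TRT_LOGGER) as parser:
        with open("mnist.onnx", "rb") as f:
            if not parser.parse(f.read()):
                for i in range(parser.num_errors):
                    print(parser.get_error(i))
        config.max_workspace_size = 1 << 28
        engine = builder.build_engine(network, config)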

Where is this sample located?

This sample is maintained under the samples/opensource/sampleOnnxMNIST directory in the GitHub: sampleOnnxMNIST repository. If using the Debian or RPM package, the sample is located at /usr/src/tensorrt/samples/sampleOnnxMNIST. If using the tar or zip package, the sample is at <extracted_path>/samples/sampleOnnxMNIST.

How do I get started?

For more information about getting started, see Getting Started With C++ Samples. For specifics about this sample, refer to the GitHub: sampleOnnxMNIST/README.md file for detailed information about how this sample works, sample code, and step-by-step instructions on how to run and verify its output.

5.5. Performing Inference In INT8 Using Custom Calibration

This sample, sampleINT8, performs INT8 calibration and inference.

What does this sample do?

Specifically, this sample demonstrates how to perform inference in 8-bit integer (INT8) precision. INT8 inference is available only on GPUs with compute capability 6.1 or 7.x. After the network is calibrated for execution in INT8, the output of the calibration is cached to avoid repeating the process. You can then reproduce your own experiments with Caffe in order to validate your results on ImageNet networks.

Where is this sample located?

This sample is maintained under the samples/opensource/sampleINT8 directory in the GitHub: sampleINT8 repository. If using the Debian or RPM package, the sample is located at /usr/src/tensorrt/samples/sampleINT8. If using the tar or zip package, the sample is at <extracted_path>/samples/sampleINT8.

How do I get started?

For more information about getting started, see Getting Started With C++ Samples. For specifics about this sample, refer to the GitHub: sampleINT8/README.md file for detailed information about how this sample works, sample code, and step-by-step instructions on how to run and verify its output.

5.6. Adding A Custom Layer To Your Network In TensorRT

This sample, samplePlugin, defines a custom layer that supports multiple data formats and demonstrates how to serialize/deserialize plugin layers. This sample also demonstrates how to use a fully connected plugin (FCPlugin) as a custom layer and the integration with NvCaffeParser.

Where is this sample located?

This sample is maintained under the samples/opensource/samplePlugin directory in the GitHub: samplePlugin repository. If using the Debian or RPM package, the sample is located at /usr/src/tensorrt/samples/samplePlugin. If using the tar or zip package, the sample is at <extracted_path>/samples/samplePlugin.

How do I get started?

For more information about getting started, see Getting Started With C++ Samples. For specifics about this sample, refer to the GitHub: samplePlugin/README.md file for detailed information about how this sample works, sample code, and step-by-step instructions on how to run and verify its output.

5.7. Digit Recognition With Dynamic Shapes In TensorRT

This sample, sampleDynamicReshape, demonstrates how to use dynamic input dimensions in TensorRT by creating an engine for resizing dynamically shaped inputs to the correct size for an ONNX MNIST model.

What does this sample do?

This sample creates an engine for resizing an input with dynamic dimensions to a size that an ONNX MNIST model can consume.

Specifically, this sample demonstrates how to:
  • Create a network with dynamic input dimensions to act as a preprocessor for the model
  • Parse an ONNX MNIST model to create a second network
  • Build engines for both networks and start calibration if running in INT8
  • Run inference using both engines
For more information, see Working With Dynamic Shapes in the TensorRT Developer Guide.
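
A minimal sketch of the preprocessor half of this sample using the TensorRT Python API follows; the actual sample is written in C++, and the min/opt/max shapes below are illustrative assumptions.

    import tensorrt as trt

    TRT_LOGGER = trt.Logger(trt.Logger.WARNING)
    EXPLICIT_BATCH = 1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)

    builder = trt.Builder(TRT_LOGGER)
    network = builder.create_network(EXPLICIT_BATCH)

    # Preprocessor network: accept an image with unknown height/width (-1) and
    # resize it to the 28x28 input that the MNIST model expects.
    input_tensor = network.add_input("input", trt.float32, (1, 1, -1, -1))
    resize = network.add_resize(input_tensor)
    resize.shape = (1, 1, 28, 28)
    network.mark_output(resize.get_output(0))

    # An optimization profile declares the range of shapes the dynamic input may take.
    config = builder.create_builder_config()
    profile = builder.create_optimization_profile()
    profile.set_shape("input", (1, 1, 1, 1), (1, 1, 28, 28), (1, 1, 56, 56))  # min, opt, max
    config.add_optimization_profile(profile)
    engine = builder.build_engine(network, config)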

Where is this sample located?

This sample is maintained under the samples/opensource/sampleDynamicReshape directory in the GitHub: sampleDynamicReshape repository. If using the Debian or RPM package, the sample is located at /usr/src/tensorrt/samples/sampleDynamicReshape. If using the tar or zip package, the sample is at <extracted_path>/samples/sampleDynamicReshape.

How do I get started?

For more information about getting started, see Getting Started With C++ Samples. For specifics about this sample, refer to the GitHub: sampleDynamicReshape/README.md file for detailed information about how this sample works, sample code, and step-by-step instructions on how to run and verify its output.

5.8. Specifying I/O Formats Using The Reformat Free I/O APIs

This sample, sampleReformatFreeIO, uses a Caffe model that was trained on the MNIST dataset and performs engine building and inference using TensorRT. The correctness of outputs is then compared to the golden reference.

What does this sample do?

Specifically, this sample shows how to use reformat free I/O APIs to explicitly specify I/O formats to TensorFormat::kLINEAR, TensorFormat::kCHW2 and TensorFormat::kHWC8 for Float16 and INT8 precision.

ITensor::setAllowedFormats is invoked to specify which formats are expected to be supported so that unnecessary reformatting is not inserted to convert from/to FP32 formats for I/O tensors. BuilderFlag::kSTRICT_TYPES is also assigned to the builder configuration to let the builder choose a reformat-free path rather than the fastest path.
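
A minimal sketch of these two calls with the TensorRT Python API, using a trivial identity network as a stand-in for the parsed MNIST model (the sample itself is written in C++):

    import tensorrt as trt

    TRT_LOGGER = trt.Logger(trt.Logger.WARNING)
    EXPLICIT_BATCH = 1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)

    builder = trt.Builder(TRT_LOGGER)
    network = builder.create_network(EXPLICIT_BATCH)
    config = builder.create_builder_config()

    # Trivial identity network standing in for the real model.
    inp = network.add_input("data", trt.float16, (1, 1, 28, 28))
    out = network.add_identity(inp).get_output(0)
    out.dtype = trt.float16
    network.mark_output(out)

    # Restrict the I/O tensors to a specific format so no reformat layers are inserted for them.
    allowed = 1 << int(trt.TensorFormat.CHW2)       # LINEAR or HWC8 could be OR'd in as well
    inp.allowed_formats = allowed
    out.allowed_formats = allowed

    config.set_flag(trt.BuilderFlag.FP16)           # allow FP16 kernels
    config.set_flag(trt.BuilderFlag.STRICT_TYPES)   # prefer a reformat-free path over the fastest path
    engine = builder.build_engine(network, config)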

Where is this sample located?

This sample is maintained under the samples/opensource/sampleReformatFreeIO directory in the GitHub: sampleReformatFreeIO repository. If using the Debian or RPM package, the sample is located at /usr/src/tensorrt/samples/sampleReformatFreeIO. If using the tar or zip package, the sample is at <extracted_path>/samples/sampleReformatFreeIO.

How do I get started?

For more information about getting started, see Getting Started With C++ Samples. For specifics about this sample, refer to the GitHub: sampleReformatFreeIO/README.md file for detailed information about how this sample works, sample code, and step-by-step instructions on how to run and verify its output.

5.9. Adding A Custom Layer That Supports INT8 I/O To Your Network In TensorRT

This sample, sampleUffPluginV2Ext, implements the custom pooling layer for the MNIST model (data/samples/lenet5_custom_pool.uff).

What does this sample do?

Since the cuDNN function cudnnPoolingForward with float precision is used to simulate an INT8 kernel, INT8 precision does not provide a speedup in this sample. Nevertheless, the main purpose of this sample is to demonstrate how to extend INT8 I/O for a plugin that is introduced in TensorRT 6.0. This requires the interface replacement from IPlugin/IPluginV2/IPluginV2Ext to IPluginV2IOExt (or IPluginV2DynamicExt if dynamic shape is required).

Where is this sample located?

This sample is maintained under the samples/opensource/sampleUffPluginV2Ext directory in the GitHub: sampleUffPluginV2Ext repository. If using the Debian or RPM package, the sample is located at /usr/src/tensorrt/samples/sampleUffPluginV2Ext. If using the tar or zip package, the sample is at <extracted_path>/samples/sampleUffPluginV2Ext.

How do I get started?

For more information about getting started, see Getting Started With C++ Samples. For specifics about this sample, refer to the GitHub: sampleUffPluginV2Ext/README.md file for detailed information about how this sample works, sample code, and step-by-step instructions on how to run and verify its output.

5.10. “Hello World” For TensorRT Using TensorFlow And Python

This sample, end_to_end_tensorflow_mnist, trains a small, fully-connected model on the MNIST dataset and runs inference using TensorRT.

Where Is This Sample Located?

If using the Debian or RPM package, the sample is located at /usr/src/tensorrt/samples/python/end_to_end_tensorflow_mnist. If using the tar or zip package, the sample is at <extracted_path>/samples/python/end_to_end_tensorflow_mnist.

Getting Started:

For more information about getting started, see Getting Started With Python Samples. For specifics about this sample, refer to the /usr/src/tensorrt/samples/python/end_to_end_tensorflow_mnist/README.md file for detailed information about how this sample works, sample code, and step-by-step instructions on how to run and verify its output.

5.11. Refitting An Engine In Python

This sample, engine_refit_mnist, trains an MNIST model in PyTorch, recreates the network in TensorRT with dummy weights, and finally refits the TensorRT engine with weights from the model. Refitting allows us to quickly modify the weights in a TensorRT engine without needing to rebuild.
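
A minimal sketch of the refit step with the TensorRT Python API follows; it assumes `engine` was built with the trt.BuilderFlag.REFIT flag set and contains a layer named "conv1", both of which are placeholders.

    import numpy as np
    import tensorrt as trt

    TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

    # Supply real weights for a layer that was built with dummy weights, then refit in place.
    refitter = trt.Refitter(engine, TRT_LOGGER)
    new_kernel = np.random.rand(20, 1, 5, 5).astype(np.float32)   # placeholder weights
    refitter.set_weights("conv1", trt.WeightsRole.KERNEL, new_kernel)

    # refit_cuda_engine() succeeds only once every refittable weight has been supplied.
    assert refitter.refit_cuda_engine()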

Where Is This Sample Located?

If using the Debian or RPM package, the sample is located at /usr/src/tensorrt/samples/python/engine_refit_mnist. If using the tar or zip package, the sample is at <extracted_path>/samples/python/engine_refit_mnist.

Getting Started:

For more information about getting started, see Getting Started With Python Samples. For specifics about this sample, refer to the /usr/src/tensorrt/samples/python/engine_refit_mnist/README.md file for detailed information about how this sample works, sample code, and step-by-step instructions on how to run and verify its output.

5.12. INT8 Calibration In Python

This sample, int8_caffe_mnist, demonstrates how to create an INT8 calibrator, build and calibrate an engine for INT8 mode, and finally run inference in INT8 mode.
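
The heart of the sample is a calibrator class. The following is a condensed, hedged sketch of the idea; data loading, shapes, and class names are placeholders, and the real sample differs in detail.

    import os
    import numpy as np
    import pycuda.autoinit
    import pycuda.driver as cuda
    import tensorrt as trt

    class MNISTEntropyCalibrator(trt.IInt8EntropyCalibrator2):
        """Feeds batches of preprocessed images to TensorRT during INT8 calibration."""

        def __init__(self, data, cache_file, batch_size=32):
            trt.IInt8EntropyCalibrator2.__init__(self)
            self.data = data                  # numpy array of shape (N, 1, 28, 28), float32
            self.cache_file = cache_file
            self.batch_size = batch_size
            self.index = 0
            self.device_input = cuda.mem_alloc(self.data[0].nbytes * batch_size)

        def get_batch_size(self):
            return self.batch_size

        def get_batch(self, names):
            if self.index + self.batch_size > len(self.data):
                return None                   # no batches left: calibration is finished
            batch = np.ascontiguousarray(self.data[self.index:self.index + self.batch_size])
            cuda.memcpy_htod(self.device_input, batch)
            self.index += self.batch_size
            return [int(self.device_input)]

        def read_calibration_cache(self):     # reuse a previous calibration run if present
            if os.path.exists(self.cache_file):
                with open(self.cache_file, "rb") as f:
                    return f.read()

        def write_calibration_cache(self, cache):
            with open(self.cache_file, "wb") as f:
                f.write(cache)

The calibrator is then attached to the builder configuration with config.set_flag(trt.BuilderFlag.INT8) and config.int8_calibrator before the engine is built.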

Where Is This Sample Located?

If using the Debian or RPM package, the sample is located at /usr/src/tensorrt/samples/python/int8_caffe_mnist. If using the tar or zip package, the sample is at <extracted_path>/samples/python/int8_caffe_mnist.

Getting Started:

For more information about getting started, see Getting Started With Python Samples. For specifics about this sample, refer to the /usr/src/tensorrt/samples/python/int8_caffe_mnist/README.md file for detailed information about how this sample works, sample code, and step-by-step instructions on how to run and verify its output.

5.13. “Hello World” For TensorRT Using PyTorch And Python

This sample, network_api_pytorch_mnist, trains a convolutional model on the MNIST dataset and runs inference with a TensorRT engine.

Where Is This Sample Located?

If using the Debian or RPM package, the sample is located at /usr/src/tensorrt/samples/python/network_api_pytorch_mnist. If using the tar or zip package, the sample is at <extracted_path>/samples/python/network_api_pytorch_mnist.

Getting Started:

For more information about getting started, see Getting Started With Python Samples. For specifics about this sample, refer to the /usr/src/tensorrt/samples/python/network_api_pytorch_mnist/README.md file for detailed information about how this sample works, sample code, and step-by-step instructions on how to run and verify its output.

5.14. Adding A Custom Layer To Your TensorFlow Network In TensorRT In Python

This sample, uff_custom_plugin, demonstrates how to use plugins written in C++ with the TensorRT Python bindings and UFF Parser. This sample uses the MNIST dataset.

Where Is This Sample Located?

If using the Debian or RPM package, the sample is located at /usr/src/tensorrt/samples/python/uff_custom_plugin. If using the tar or zip package, the sample is at <extracted_path>/samples/python/uff_custom_plugin.

Getting Started:

For more information about getting started, see Getting Started With Python Samples. For specifics about this sample, refer to the /usr/src/tensorrt/samples/python/uff_custom_plugin/README.md file for detailed information about how this sample works, sample code, and step-by-step instructions on how to run and verify its output.

5.15. Algorithm Selection API Usage Example Based On sampleMNIST In TensorRT

This sample, sampleAlgorithmSelector, shows an example of how to use the algorithm selection API based on sampleMNIST.

What does this sample do?

This sample demonstrates the usage of IAlgorithmSelector to deterministically build TensorRT engines. It also shows the usage of IAlgorithmSelector::selectAlgorithms to define heuristics for selection of algorithms.

This sample uses a Caffe model that was trained on the MNIST dataset.

To verify whether the engine is operating correctly, this sample picks a 28x28 image of a digit at random and runs inference on it using the engine it created. The output of the network is a probability distribution over the digits, indicating which digit is most likely to be in the image.

Where is this sample located?

This sample is located in the release product package only. If using the Debian or RPM package, the sample is located at /usr/src/tensorrt/samples/sampleAlgorithmSelector. If using the tar or zip package, the sample is at <extracted_path>/samples/sampleAlgorithmSelector.

How do I get started?

For more information about getting started, see Getting Started With C++ Samples. For specifics about this sample, refer to the /usr/src/tensorrt/samples/sampleAlgorithmSelector/README.md file for detailed information about how this sample works, sample code, and step-by-step instructions on how to run and verify its output.

6. Image Classification

Image classification is the problem of identifying one or more objects present in an image. Convolutional neural networks (CNN) are a popular choice for solving this problem. They are typically composed of convolution and pooling layers.

6.1. Building And Running GoogleNet In TensorRT

This sample, sampleGoogleNet, demonstrates how to import a model trained with Caffe into TensorRT using GoogleNet as an example.

What does this sample do?

Specifically, this sample builds a TensorRT engine from the saved Caffe model, sets input values to the engine, and runs it.

Where is this sample located?

This sample is maintained under the samples/opensource/sampleGoogleNet directory in the GitHub: sampleGoogleNet repository. If using the Debian or RPM package, the sample is located at /usr/src/tensorrt/samples/sampleGoogleNet. If using the tar or zip package, the sample is at <extracted_path>/samples/sampleGoogleNet.

How do I get started?

For more information about getting started, see Getting Started With C++ Samples. For specifics about this sample, refer to the GitHub: sampleGoogleNet/README.md file for detailed information about how this sample works, sample code, and step-by-step instructions on how to run and verify its output.

6.2. Performing Inference In INT8 Precision

This sample, sampleINT8API, performs INT8 inference without using the INT8 calibrator, instead using user-provided per-tensor dynamic ranges for the activation tensors. INT8 inference is available only on GPUs with compute capability 6.1 or 7.x and supports Image Classification ONNX models such as ResNet-50, VGG19, and MobileNet.

What does this sample do?

Specifically, this sample demonstrates how to (a minimal Python sketch follows this list):
  • Use nvinfer1::ITensor::setDynamicRange to set per-tensor dynamic range
  • Use nvinfer1::ILayer::setPrecision to set the computation precision of a layer
  • Use nvinfer1::ILayer::setOutputType to set the output tensor data type of a layer
  • Perform INT8 inference without using INT8 calibration
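
The sketch below shows those calls with the TensorRT Python API; `network` and `config` are assumed to already hold the parsed model and its builder configuration, and the range value is a placeholder (real per-tensor ranges would normally be read from a file).

    import tensorrt as trt

    config.set_flag(trt.BuilderFlag.INT8)          # enable INT8 without a calibrator

    for i in range(network.num_inputs):
        network.get_input(i).dynamic_range = (-2.5, 2.5)   # placeholder range

    for i in range(network.num_layers):
        layer = network.get_layer(i)
        for j in range(layer.num_outputs):
            # Per-tensor dynamic range: TensorRT derives the INT8 scale from it.
            layer.get_output(j).dynamic_range = (-2.5, 2.5)
        # Optionally pin the layer's computation precision and output type.
        layer.precision = trt.int8
        layer.set_output_type(0, trt.int8)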

Where is this sample located?

This sample is maintained under the samples/opensource/sampleINT8API directory in the GitHub: sampleINT8API repository. If using the Debian or RPM package, the sample is located at /usr/src/tensorrt/samples/sampleINT8API. If using the tar or zip package, the sample is at <extracted_path>/samples/sampleINT8API.

How do I get started?

For more information about getting started, see Getting Started With C++ Samples. For specifics about this sample, refer to the GitHub: sampleINT8API/README.md file for detailed information about how this sample works, sample code, and step-by-step instructions on how to run and verify its output.

6.3. Introduction To Importing Caffe, TensorFlow And ONNX Models Into TensorRT Using Python

This sample, introductory_parser_samples, is a Python sample that uses TensorRT and its included suite of parsers (the UFF, Caffe and ONNX parsers) to perform inference with ResNet-50 models trained with various frameworks.

Where Is This Sample Located?

If using the Debian or RPM package, the sample is located at /usr/src/tensorrt/samples/python/introductory_parser_samples. If using the tar or zip package, the sample is at <extracted_path>/samples/python/introductory_parser_samples.

Getting Started:

For more information about getting started, see Getting Started With Python Samples. For specifics about this sample, refer to the /usr/src/tensorrt/samples/python/introductory_parser_samples/README.md file for detailed information about how this sample works, sample code, and step-by-step instructions on how to run and verify its output.

6.4. TensorRT Inference Of ONNX Models With Custom Layers In Python

This sample, onnx_packnet, uses TensorRT to perform inference with the PackNet network. PackNet is a self-supervised monocular depth estimation network used in autonomous driving.

What does this sample do?

This sample converts the PyTorch graph into ONNX and uses an ONNX-parser included in TensorRT to parse the ONNX graph. The sample also demonstrates how to:
  • Use custom layers (plugins) in an ONNX graph. These plugins can be automatically registered in TensorRT by using the REGISTER_TENSORRT_PLUGIN API.
  • Use the ONNX GraphSurgeon (ONNX-GS) API to modify layers or subgraphs in the ONNX graph. For this network, we transform Group Normalization, upsample and pad layers to remove unnecessary nodes for inference with TensorRT.
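
A minimal sketch of the ONNX GraphSurgeon workflow is shown below; the file names and the Resize-node edit are illustrative assumptions, not the actual Group Normalization, upsample, and pad transformations the sample performs.

    import onnx
    import onnx_graphsurgeon as gs

    # Load the exported ONNX model into a mutable graph.
    graph = gs.import_onnx(onnx.load("model.onnx"))

    # Example edit: change an attribute on every Resize node, then drop any nodes
    # that are no longer connected to the graph outputs.
    for node in graph.nodes:
        if node.op == "Resize":
            node.attrs["mode"] = "nearest"

    graph.cleanup().toposort()
    onnx.save(gs.export_onnx(graph), "model_modified.onnx")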

Where is this sample located?

If using the Debian or RPM package, the sample is located at /usr/src/tensorrt/samples/python/onnx_packnet. If using the tar or zip package, the sample is at <extracted_path>/samples/python/onnx_packnet.

How do I get started?

For more information about getting started, see Getting Started With Python Samples. For specifics about this sample, refer to the /usr/src/tensorrt/samples/python/onnx_packnet/README.md file for detailed information about how this sample works, sample code, and step-by-step instructions on how to run and verify its output.

7. Object Detection

Object detection is one of the classic computer vision problems. The task, for a given image, is to detect, classify and localize all objects of interest. For example, imagine that you are developing a self-driving car and you need to do pedestrian detection - the object detection algorithm would then, for a given image, return bounding box coordinates for each pedestrian in an image.

There have been many advances in recent years in designing models for object detection.

7.1. Object Detection With SSD In Python

This sample, uff_ssd, implements a full UFF-based pipeline for performing inference with an SSD (InceptionV2 feature extractor) network.

What Does This Sample Do?

This sample is based on the SSD: Single Shot MultiBox Detector paper. The SSD network, built on the VGG-16 network, performs the task of object detection and localization in a single forward pass of the network. This approach discretizes the output space of bounding boxes into a set of default boxes over different aspect ratios and scales per feature map location. At prediction time, the network generates scores for the presence of each object category in each default box and produces adjustments to the box to better match the object shape. Additionally, the network combines predictions from multiple features with different resolutions to naturally handle objects of various sizes.

This sample is based on the TensorFlow implementation of SSD. For more information, download ssd_inception_v2_coco. Unlike the paper, the TensorFlow SSD network was trained on the InceptionV2 architecture using the MSCOCO dataset which has 91 classes (including the background class). The config details of the network can be found here.

Where Is This Sample Located?

If using the Debian or RPM package, the sample is located at /usr/src/tensorrt/samples/python/uff_ssd. If using the tar or zip package, the sample is at <extracted_path>/samples/python/uff_ssd.

Getting Started:

For more information about getting started, see Getting Started With Python Samples. For specifics about this sample, refer to the /usr/src/tensorrt/samples/python/uff_ssd/README.md file for detailed information about how this sample works, sample code, and step-by-step instructions on how to run and verify its output.

7.2. Object Detection With The ONNX TensorRT Backend In Python

This sample, yolov3_onnx, implements a full ONNX-based pipeline for performing inference with the YOLOv3 network, with an input size of 608x608 pixels, including pre and post-processing.

What Does This Sample Do?

This sample is based on the YOLOv3-608 paper.
Note: This sample is not supported on Ubuntu 14.04 and older. Additionally, the yolov3_to_onnx.py script does not support Python 3.

Where Is This Sample Located?

If using the Debian or RPM package, the sample is located at /usr/src/tensorrt/samples/python/yolov3_onnx. If using the tar or zip package, the sample is at <extracted_path>/samples/python/yolov3_onnx.

Getting Started:

For more information about getting started, see Getting Started With Python Samples. For specifics about this sample, refer to the /usr/src/tensorrt/samples/python/yolov3_onnx/README.md file for detailed information about how this sample works, sample code, and step-by-step instructions on how to run and verify its output.

7.3. Object Detection With A TensorFlow SSD Network

This sample, sampleUffSSD, preprocesses a TensorFlow SSD network, performs inference on the SSD network in TensorRT, and uses TensorRT plugins to speed up inference.

What does this sample do?

This sample is based on the SSD: Single Shot MultiBox Detector paper. The SSD network performs the task of object detection and localization in a single forward pass of the network.

The SSD network used in this sample is based on the TensorFlow implementation of SSD, which actually differs from the original paper, in that it has an inception_v2 backbone. For more information about the actual model, download ssd_inception_v2_coco. The TensorFlow SSD network was trained on the InceptionV2 architecture using the MSCOCO dataset which has 91 classes (including the background class). The config details of the network can be found here.

Where is this sample located?

If using the Debian or RPM package, the sample is located at /usr/src/tensorrt/samples/sampleUffSSD. If using the tar or zip package, the sample is at <extracted_path>/samples/sampleUffSSD.

How do I get started?

For more information about getting started, see Getting Started With C++ Samples. For specifics about this sample, refer to the GitHub: sampleUffSSD/README.md file for detailed information about how this sample works, sample code, and step-by-step instructions on how to run and verify its output.

7.4. Object Detection With Faster R-CNN

This sample, sampleFasterRCNN, uses TensorRT plugins, performs inference, and implements a fused custom layer for end-to-end inferencing of a Faster R-CNN model.

What does this sample do?

Specifically, this sample demonstrates the implementation of a Faster R-CNN network in TensorRT, performs a quick performance test in TensorRT, implements a fused custom layer, and constructs the basis for further optimization, for example using INT8 calibration, user trained network, etc. The Faster R-CNN network is based on the paper Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks.

Where is this sample located?

This sample is maintained under the samples/opensource/sampleFasterRCNN directory in the GitHub: sampleFasterRCNN repository. If using the Debian or RPM package, the sample is located at /usr/src/tensorrt/samples/sampleFasterRCNN. If using the tar or zip package, the sample is at <extracted_path>/samples/sampleFasterRCNN.

How do I get started?

For more information about getting started, see Getting Started With C++ Samples. For specifics about this sample, refer to the GitHub: sampleFasterRCNN/README.md file for detailed information about how this sample works, sample code, and step-by-step instructions on how to run and verify its output.

7.5. Object Detection With SSD

This sample, sampleSSD, performs the task of object detection and localization in a single forward pass of the network.

What does this sample do?

This sample is based on the SSD: Single Shot MultiBox Detector paper. This network is built using the VGG network as a backbone and trained using the PASCAL VOC 2007 and 2012 datasets.

Unlike Faster R-CNN, SSD completely eliminates the proposal generation and subsequent pixel or feature resampling stages and encapsulates all computation in a single network. This makes SSD straightforward to integrate into systems that require a detection component.

Where is this sample located?

This sample is maintained under the samples/opensource/sampleSSD directory in the GitHub: sampleSSD repository. If using the Debian or RPM package, the sample is located at /usr/src/tensorrt/samples/sampleSSD. If using the tar or zip package, the sample is at <extracted_path>/samples/sampleSSD.

How do I get started?

For more information about getting started, see Getting Started With C++ Samples. For specifics about this sample, refer to the GitHub: sampleSSD/README.md file for detailed information about how this sample works, sample code, and step-by-step instructions on how to run and verify its output.

7.6. Object Detection And Instance Segmentation With A TensorFlow Mask R-CNN Network

This sample, sampleUffMaskRCNN, performs inference on the Mask R-CNN network in TensorRT.

What does this sample do?

Mask R-CNN is based on the Mask R-CNN paper which performs the task of object detection and object mask predictions on a target image.

This sample’s model is based on the Keras implementation of Mask R-CNN, and its training framework can be found in the Mask R-CNN GitHub repository. We have verified that the pre-trained Keras model (with a ResNet101 + FPN backbone and the COCO dataset) provided in the v2.0 release can be converted to UFF and consumed by this sample. It is also feasible to deploy your customized Mask R-CNN model trained with a specific backbone and dataset.

This sample makes use of TensorRT plugins to run the Mask R-CNN model. To use these plugins, the Keras model must first be converted to a TensorFlow .pb model. This .pb model then needs to be preprocessed and converted to UFF with the help of GraphSurgeon and the UFF utility.

Where is this sample located?

This sample is maintained under the samples/opensource/sampleUffMaskRCNN directory in the GitHub: sampleUffMaskRCNN repository. If using the Debian or RPM package, the sample is located at /usr/src/tensorrt/samples/sampleUffMaskRCNN. If using the tar or zip package, the sample is at <extracted_path>/samples/sampleUffMaskRCNN.

How do I get started?

For more information about getting started, see Getting Started With C++ Samples. For specifics about this sample, refer to the GitHub: sampleUffMaskRCNN/README.md file for detailed information about how this sample works, sample code, and step-by-step instructions on how to run and verify its output.

7.7. Object Detection With A TensorFlow Faster R-CNN Network

This sample, sampleUffFasterRCNN, serves as a demo of how to use the pre-trained Faster-RCNN model in Transfer Learning Toolkit to do inference with TensorRT.

What does this sample do?

This sample is a UFF TensorRT sample for Faster R-CNN in the NVIDIA Transfer Learning Toolkit SDK. Besides the sample itself, it also provides two TensorRT plugins, Proposal and CropAndResize, which implement the proposal layer and ROIPooling layer as custom layers, since TensorRT has no native support for them.

In this sample, we provide a UFF model as a demo. In the Transfer Learning Toolkit workflow, however, a UFF model is not available: training produces a .tlt model and tlt-export produces an .etlt model, both of which are encrypted. The Transfer Learning Toolkit user runs tlt-converter to decrypt the .etlt model and generate a TensorRT engine file in a single step, so in that workflow the TensorRT engine is consumed instead of a UFF model. Nevertheless, this sample can still serve as a demo of how to use a UFF Faster R-CNN model regardless of its format.

Where is this sample located?

This sample is maintained under the samples/opensource/sampleUffFasterRCNN directory in the GitHub: sampleUffFasterRCNN repository. If using the Debian or RPM package, the sample is located at /usr/src/tensorrt/samples/sampleUffFasterRCNN. If using the tar or zip package, the sample is at <extracted_path>/samples/sampleUffFasterRCNN.

How do I get started?

For more information about getting started, see Getting Started With C++ Samples. For specifics about this sample, refer to the GitHub: sampleUffFasterRCNN/README.md file for detailed information about how this sample works, sample code, and step-by-step instructions on how to run and verify its output.

Notices

Notice

This document is provided for information purposes only and shall not be regarded as a warranty of a certain functionality, condition, or quality of a product. NVIDIA Corporation (“NVIDIA”) makes no representations or warranties, expressed or implied, as to the accuracy or completeness of the information contained in this document and assumes no responsibility for any errors contained herein. NVIDIA shall have no liability for the consequences or use of such information or for any infringement of patents or other rights of third parties that may result from its use. This document is not a commitment to develop, release, or deliver any Material (defined below), code, or functionality.

NVIDIA reserves the right to make corrections, modifications, enhancements, improvements, and any other changes to this document, at any time without notice.

Customer should obtain the latest relevant information before placing orders and should verify that such information is current and complete.

NVIDIA products are sold subject to the NVIDIA standard terms and conditions of sale supplied at the time of order acknowledgement, unless otherwise agreed in an individual sales agreement signed by authorized representatives of NVIDIA and customer (“Terms of Sale”). NVIDIA hereby expressly objects to applying any customer general terms and conditions with regards to the purchase of the NVIDIA product referenced in this document. No contractual obligations are formed either directly or indirectly by this document.

NVIDIA products are not designed, authorized, or warranted to be suitable for use in medical, military, aircraft, space, or life support equipment, nor in applications where failure or malfunction of the NVIDIA product can reasonably be expected to result in personal injury, death, or property or environmental damage. NVIDIA accepts no liability for inclusion and/or use of NVIDIA products in such equipment or applications and therefore such inclusion and/or use is at customer’s own risk.

NVIDIA makes no representation or warranty that products based on this document will be suitable for any specified use. Testing of all parameters of each product is not necessarily performed by NVIDIA. It is customer’s sole responsibility to evaluate and determine the applicability of any information contained in this document, ensure the product is suitable and fit for the application planned by customer, and perform the necessary testing for the application in order to avoid a default of the application or the product. Weaknesses in customer’s product designs may affect the quality and reliability of the NVIDIA product and may result in additional or different conditions and/or requirements beyond those contained in this document. NVIDIA accepts no liability related to any default, damage, costs, or problem which may be based on or attributable to: (i) the use of the NVIDIA product in any manner that is contrary to this document or (ii) customer product designs.

No license, either expressed or implied, is granted under any NVIDIA patent right, copyright, or other NVIDIA intellectual property right under this document. Information published by NVIDIA regarding third-party products or services does not constitute a license from NVIDIA to use such products or services or a warranty or endorsement thereof. Use of such information may require a license from a third party under the patents or other intellectual property rights of the third party, or a license from NVIDIA under the patents or other intellectual property rights of NVIDIA.

Reproduction of information in this document is permissible only if approved in advance by NVIDIA in writing, reproduced without alteration and in full compliance with all applicable export laws and regulations, and accompanied by all associated conditions, limitations, and notices.

THIS DOCUMENT AND ALL NVIDIA DESIGN SPECIFICATIONS, REFERENCE BOARDS, FILES, DRAWINGS, DIAGNOSTICS, LISTS, AND OTHER DOCUMENTS (TOGETHER AND SEPARATELY, “MATERIALS”) ARE BEING PROVIDED “AS IS.” NVIDIA MAKES NO WARRANTIES, EXPRESSED, IMPLIED, STATUTORY, OR OTHERWISE WITH RESPECT TO THE MATERIALS, AND EXPRESSLY DISCLAIMS ALL IMPLIED WARRANTIES OF NONINFRINGEMENT, MERCHANTABILITY, AND FITNESS FOR A PARTICULAR PURPOSE. TO THE EXTENT NOT PROHIBITED BY LAW, IN NO EVENT WILL NVIDIA BE LIABLE FOR ANY DAMAGES, INCLUDING WITHOUT LIMITATION ANY DIRECT, INDIRECT, SPECIAL, INCIDENTAL, PUNITIVE, OR CONSEQUENTIAL DAMAGES, HOWEVER CAUSED AND REGARDLESS OF THE THEORY OF LIABILITY, ARISING OUT OF ANY USE OF THIS DOCUMENT, EVEN IF NVIDIA HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES. Notwithstanding any damages that customer might incur for any reason whatsoever, NVIDIA’s aggregate and cumulative liability towards customer for the products described herein shall be limited in accordance with the Terms of Sale for the product.

VESA DisplayPort

DisplayPort and DisplayPort Compliance Logo, DisplayPort Compliance Logo for Dual-mode Sources, and DisplayPort Compliance Logo for Active Cables are trademarks owned by the Video Electronics Standards Association in the United States and other countries.

HDMI

HDMI, the HDMI logo, and High-Definition Multimedia Interface are trademarks or registered trademarks of HDMI Licensing LLC.

OpenCL

OpenCL is a trademark of Apple Inc. used under license to the Khronos Group Inc.

Trademarks

NVIDIA, the NVIDIA logo, and cuBLAS, CUDA, CUDA Toolkit, cuDNN, DALI, DIGITS, DGX, DGX-1, DGX-2, DGX Station, DLProf, GPU, JetPack, Jetson, Kepler, Maxwell, NCCL, Nsight Compute, Nsight Systems, NVCaffe, NVIDIA Ampere GPU architecture, NVIDIA Deep Learning SDK, NVIDIA Developer Program, NVIDIA GPU Cloud, NVLink, NVSHMEM, PerfWorks, Pascal, SDK Manager, T4, Tegra, TensorRT, TensorRT Inference Server, Tesla, TF-TRT, Triton Inference Server, Turing, and Volta are trademarks and/or registered trademarks of NVIDIA Corporation in the United States and other countries. Other company and product names may be trademarks of the respective companies with which they are associated.
