Jetson Linux Multimedia API Reference

32.4.2 Release

CAFFE to TensorRT Model Tool


NVIDIA® TensorRT is an accelerated neural network inference engine and runtime library. ConvertCaffeToTrtModel is a standalone model conversion tool that converts a CAFFE network into a TensorRT-compatible model. The tool runs offline on the NVIDIA® Jetson platform and writes a cached TensorRT model stream so that subsequent runs do not repeat the network conversion. Using the converted model, TensorRT-based applications can achieve greatly improved inference performance.

If the source model changes (i.e., is retrained), the tool performs conversion again to enable TensorRT accelerated inference.
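The cache-reuse idea described above can be sketched in plain C++: if the serialized model stream is already on disk, an application loads it instead of converting again. The helper name and file name here are illustrative only (the file name matches the `-s trtModel.cache` example later in this page); a real application would hand the non-empty buffer to TensorRT's runtime for deserialization.

```cpp
#include <fstream>
#include <string>
#include <vector>

// Hypothetical helper: load a previously serialized TensorRT model stream
// from disk. Returns an empty vector when no cache file exists, which
// signals that the conversion tool must be run (or re-run) first.
std::vector<char> loadTrtCache(const std::string& path) {
    std::ifstream in(path, std::ios::binary | std::ios::ate);
    if (!in) return {};                       // no cache: conversion needed
    std::streamsize size = in.tellg();
    in.seekg(0, std::ios::beg);
    std::vector<char> buf(static_cast<size_t>(size));
    in.read(buf.data(), size);
    return buf;
}
```

If the source model is retrained, deleting the cache file (or passing a new `-s` path) forces the tool to regenerate the stream.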

Building and Running

Prerequisites

  • You have followed Steps 1-3 in Building and Running.
  • If you are building from your host Linux PC (x86), you have followed Step 4 in Building and Running.
  • You have installed the TensorRT package.
  • You have trained a deep-learning network.

To build:
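The build command was dropped from this page. For the companion multimedia samples the build step is typically just `make` in the tool's source directory; treat this as an assumption and confirm against your release's sample tree:

```shell
# Assumed build step (standard sample Makefile in the tool's directory):
make
```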


To run:

  • Enter:

    ./ConvertCaffeToTrtModel \
        -n ../../data/Model/GoogleNet_one_class/GoogleNet_modified_oneClass_halfHD.prototxt \
        -l ../../data/Model/GoogleNet_one_class/GoogleNet_modified_oneClass_halfHD.caffemodel \
        -m detection -o coverage,bboxes -f fp16 -b 2 -w 115343360 -s trtModel.cache
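The meanings below are inferred from this example invocation, not from the tool's own documentation; confirm them with the `-h` output on your system:

```shell
# Inferred meanings of the options used above (verify with -h):
#   -n  network description (deploy .prototxt)
#   -l  trained weights (.caffemodel)
#   -m  model mode (here: detection)
#   -o  comma-separated output blob names
#   -f  precision of the built engine (fp16)
#   -b  maximum batch size
#   -w  builder workspace size in bytes (115343360 bytes = 110 MiB)
#   -s  file in which to store the serialized TensorRT model cache
./ConvertCaffeToTrtModel -h
```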

To get a list of supported options:

  • Use the -h option.

Key Structure and Classes

The CudaEngine structure is the TensorRT interface through which the tool invokes TensorRT functions.

The sample uses the following function:

    Function          Description
    caffeToTRTModel   Uses the TensorRT API to convert a network model from CAFFE to TensorRT.
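A caffeToTRTModel-style conversion can be sketched with the public TensorRT Caffe-parser API (TensorRT 7.x, as shipped with the L4T 32.4 releases). This is a hedged reconstruction, not the tool's actual source: error handling is omitted, and everything except the TensorRT API calls themselves (the function name, the hard-coded output blobs and option values taken from the example invocation above) is illustrative.

```cpp
// Sketch of a CAFFE-to-TensorRT conversion using the TensorRT 7.x API.
// Names other than the TensorRT API itself are hypothetical.
#include <cstdio>
#include <fstream>
#include <NvInfer.h>
#include <NvCaffeParser.h>

class Logger : public nvinfer1::ILogger {
    void log(Severity severity, const char* msg) override {
        if (severity <= Severity::kWARNING) std::printf("%s\n", msg);
    }
} gLogger;

void caffeToTrtModelSketch(const char* deployFile, const char* modelFile,
                           const char* cacheFile) {
    auto builder = nvinfer1::createInferBuilder(gLogger);
    auto network = builder->createNetworkV2(0U);      // implicit-batch network
    auto parser  = nvcaffeparser1::createCaffeParser();

    // Parse the CAFFE deploy/model pair into a TensorRT network definition.
    auto blobs = parser->parse(deployFile, modelFile, *network,
                               nvinfer1::DataType::kFLOAT);
    network->markOutput(*blobs->find("coverage"));    // output blobs from -o
    network->markOutput(*blobs->find("bboxes"));

    auto config = builder->createBuilderConfig();
    config->setMaxWorkspaceSize(115343360);           // -w: 110 MiB scratch
    config->setFlag(nvinfer1::BuilderFlag::kFP16);    // -f fp16
    builder->setMaxBatchSize(2);                      // -b 2

    // Build the engine and serialize it; the cache file is what -s writes.
    auto engine = builder->buildEngineWithConfig(*network, *config);
    auto stream = engine->serialize();
    std::ofstream out(cacheFile, std::ios::binary);
    out.write(static_cast<const char*>(stream->data()), stream->size());
}
```

A TensorRT application would later deserialize the written stream with the runtime's deserialization API instead of rebuilding the engine.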