# Copyright (c) 2017, NVIDIA CORPORATION. All rights reserved.

@page dwx_tensorRT_tool TensorRT Optimizer Tool

This tool enables optimization of a given Caffe model using TensorRT. For more
information, see the <em>NVIDIA DriveWorks Release Notes</em>.

    ./tensorRT_optimization
### Command Line Options ###

The following lists the required and optional command line arguments.
#### Required Arguments ####
- `--prototxt`: Deploy file that describes the Caffe network (e.g.,
  `--prototxt=deploy.prototxt`)
- `--caffemodel`: Caffemodel file that contains the weights (e.g.,
  `--caffemodel=weights.caffemodel`)
- `--outputBlobs`: Names of the output blobs, separated by commas (e.g.,
  `--outputBlobs=bboxes,coverage`)
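For example, a minimal invocation supplying only the required arguments might look like this (the file names are placeholders for your own model files):

```shell
# Optimize a Caffe model with TensorRT using only the required arguments.
./tensorRT_optimization --prototxt=deploy.prototxt \
                        --caffemodel=weights.caffemodel \
                        --outputBlobs=bboxes,coverage
```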
#### Optional Arguments ####
- `--iterations`: Number of iterations to run when measuring speed (e.g.,
  `--iterations=100`. Default: 10)
- `--batchSize`: Batch size of the model to be generated (e.g., `--batchSize=2`)
- `--half2`: Runs the network in paired fp16 mode (e.g., `--half2=1`.
  Default: 0). NOTE: Requires a platform that supports native fp16.
- `--inputBlobs`: Names of the input blobs, separated by commas (e.g.,
  `--inputBlobs=data`. Default: data)
- `--out`: Name of the optimized model file (e.g., `--out=model.bin`)
- `--int8`: Runs the network in INT8 mode (e.g., `--int8=1`. Default: 0)
- `--calib`: INT8 calibration file name (e.g., `--calib=calib.cache`)
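Combining the arguments above, an INT8 run with a larger batch size might be invoked as follows (a sketch; the model, calibration, and output file names are placeholders):

```shell
# INT8 optimization run: batch size 2, 100 timing iterations,
# calibration cache supplied via --calib, result written to model.bin.
./tensorRT_optimization --prototxt=deploy.prototxt \
                        --caffemodel=weights.caffemodel \
                        --outputBlobs=bboxes,coverage \
                        --batchSize=2 --iterations=100 \
                        --int8=1 --calib=calib.cache \
                        --out=model.bin
```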
@note This tool creates output files that, by default, are written to the current
working directory. Hence, write permissions to the current working directory are
necessary. For convenience, NVIDIA suggests that you:
- Include the tools folder in the binary search path of the system and
- Execute from your home directory.