TensorRT API Capture and Replay#
TensorRT API Capture and Replay streamlines the process of reproducing and debugging issues within your applications. It allows you to record the engine-building phase of an application and later replay the engine-building steps, without needing to re-run the original application or access the model’s source code.
This process is facilitated by two key components:
Capture Shim (`libtensorrt_shim.so`): A library that you preload or drop into your application. It intercepts all TensorRT API calls made during the network-build phase and saves them, along with any associated constants, as a pair of files: a JSON file for the API calls and a BIN file for the constants.
Player (`tensorrt_player`): A standalone executable that takes the JSON and BIN files generated by the Capture Shim and uses them to rebuild the TensorRT engine. You can recreate the engine built during the original application run, differing only in details related to timing variations during auto-tuning, or reproduce an engine-build failure, which significantly aids troubleshooting and debugging.
Getting Started#
The feature is currently restricted to Linux. There are two ways to run the capture step.
- Capture using `LD_PRELOAD`. However, if the user application uses `dlopen` and `dlsym` to load the TensorRT library and map its C functions (exposed via `extern "C"`) into the process address space, you must use the drop-in replacement approach.
- Capture using the drop-in replacement. Use this approach to capture `trtexec`. Capturing through the TensorRT Python API and the Python ONNX parser does not require it.

The Capture Shim is implemented in a separate library, which is installed as part of `libnvinfer-dev`.
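One way to tell which approach applies is a heuristic (an assumption on our part, not from the TensorRT documentation): a library loaded via `dlopen` does not appear in `ldd` output, so inspecting the application's link-time dependencies hints at whether `LD_PRELOAD` capture will work. The check is demonstrated on `/bin/sh` as a stand-in; substitute your own application binary.

```shell
# Check whether an application links libnvinfer at load time.
# Heuristic: dlopen'ed libraries do not appear in ldd output, so their
# absence suggests the drop-in replacement approach is needed.
app=/bin/sh   # placeholder; point this at your application
if ldd "$app" | grep -q libnvinfer; then
  echo "libnvinfer linked: LD_PRELOAD capture should work"
else
  echo "libnvinfer not in ldd output: likely dlopen'ed, use drop-in replacement"
fi
```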
Capture using `LD_PRELOAD`
export TRT_SHIM_NVINFER_LIB_NAME=<path to libnvinfer.so> [optional]
export TRT_SHIM_OUTPUT_JSON_FILE=<output JSON path>
LD_PRELOAD=libtensorrt_shim.so <your application's command-line>
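As a concrete sketch, capturing a hypothetical engine-building application might look like the following. The application name (`build_engine`), its flags, and all paths are placeholders, not values from the TensorRT documentation:

```shell
# Record the engine build of a hypothetical application "build_engine".
# Paths are placeholders; adjust them to your installation.
export TRT_SHIM_NVINFER_LIB_NAME=/usr/lib/x86_64-linux-gnu/libnvinfer.so.10  # optional
export TRT_SHIM_OUTPUT_JSON_FILE=/tmp/capture.json

# The shim intercepts the build-phase API calls and writes
# /tmp/capture.json plus a BIN file holding the captured constants.
LD_PRELOAD=libtensorrt_shim.so ./build_engine --onnx model.onnx
```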
Drop-in replacement
In this approach, we replace the `libnvinfer.so` that the application loads with `libtensorrt_shim.so` (overwriting it), and point the shim at the original TensorRT library via an environment variable.
mv <path to the TRT lib that the app loads>/libnvinfer.so.<major version> libnvinfer_orig.so.<major version>
cp build/x86_64-gnu/libtensorrt_shim.so <path to the TRT lib that the app loads>/libnvinfer.so.<major version>
TRT_SHIM_OUTPUT_JSON_FILE=<JSON file path> TRT_SHIM_NVINFER_LIB_NAME=<path to the original libnvinfer.so>/libnvinfer_orig.so.<major version> <your application's command-line>
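For instance, a drop-in capture of `trtexec` might look like this. The library directory, major version `10`, and `trtexec` flags are assumptions; substitute the values for your installation:

```shell
# Swap the real libnvinfer for the shim (paths and version are placeholders).
TRT_LIB_DIR=/usr/lib/x86_64-linux-gnu
mv $TRT_LIB_DIR/libnvinfer.so.10 $TRT_LIB_DIR/libnvinfer_orig.so.10
cp build/x86_64-gnu/libtensorrt_shim.so $TRT_LIB_DIR/libnvinfer.so.10

# Run the app; the shim forwards calls to the renamed original library
# while recording the build-phase API calls.
TRT_SHIM_OUTPUT_JSON_FILE=/tmp/capture.json \
TRT_SHIM_NVINFER_LIB_NAME=$TRT_LIB_DIR/libnvinfer_orig.so.10 \
trtexec --onnx=model.onnx
```

Remember to restore the original `libnvinfer.so` when you are done capturing.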
Player
tensorrt_player -j <output JSON file> -o <output engine file>
When running the player for a capture that uses plugins, set `LD_PRELOAD` to the plugin library path so the plugin is loaded.
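For example, replaying a capture whose network uses an externally shipped plugin library (all file names below are placeholders):

```shell
# Rebuild the engine from the recorded API calls. Preloading the plugin
# library makes its statically registered creators available during replay.
LD_PRELOAD=./libmy_plugin.so \
tensorrt_player -j /tmp/capture.json -o /tmp/replayed.engine
```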
Capture Tool Configuration#
| Environment Variable | Description |
|---|---|
| `TRT_SHIM_OUTPUT_JSON_FILE` | Path to save the captured JSON file. |

| Environment Variable | Description | Type | Default Value |
|---|---|---|---|
| `TRT_SHIM_NVINFER_LIB_NAME` | Intercepted TensorRT library name. If unset, … | | |
| | Print … | | |
| | Print … | | |
| | Lock every API call to enforce single-threaded execution. Ignored when … | | |
| | Inline weights into the JSON instead of a separate … | | |
| | Skip saving weights with an element count ≥ this threshold (they will be marked as random). | | |
| | Flush captured calls to the file after every API call instead of aggregating them. | | |
| | Path to a tactic-cache file that will be loaded and applied to TensorRT's … | | |
Known Limitations#
Capturing custom layers is limited to the following:

- Supports only Linux x86_64 in TensorRT release 10.13.3.
- `PluginV2` only.
- Plugins registered statically, by calling `REGISTER_TENSORRT_PLUGIN`. Dynamic registration via `registerCreator()` is not supported.
- C++ plugins only. Python plugins are not supported.
- The plugin must be shipped externally and not be part of the engine (that is, `config->setPluginsToSerialize` is not supported).

In addition:

- Capturing more than one network in a single process isn't supported.
- The `trtexec --saveEngine` flag is not supported.
- The `buildSerializedNetworkToStream` and `buildSerializedNetworkWithKernelText` functions are not captured.