Custom Operations

Modeling frameworks that allow custom operations are partially supported by the TensorRT Inference Server. Custom operations can be added to the server at build time or at server startup and are made available to all models loaded by the server.

TensorRT

TensorRT allows a user to create custom layers which can then be used in TensorRT models. For those models to run in the inference server the custom layers must be available to the server.

To make the custom layers available to the server, the TensorRT custom layer implementations must be compiled into one or more shared libraries which are then loaded into the inference server using LD_PRELOAD. For example, assuming your TensorRT custom layers are compiled into libtrtcustom.so, starting the inference server with the following command makes those custom layers available to all TensorRT models loaded into the server:

$ LD_PRELOAD=libtrtcustom.so trtserver --model-repository=/tmp/models ...
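
The shared library itself typically contains the custom layer (plugin) implementation together with a statically registered plugin creator, so that simply loading the library makes the layer visible to TensorRT's plugin registry. The following is a minimal C++ sketch of the creator side only; the class name, the layer name "MyCustomLayer", and the omitted IPluginV2 implementation are illustrative assumptions, and the exact method signatures depend on your TensorRT version.

#include <cstddef>
#include <string>

#include "NvInfer.h"

using namespace nvinfer1;

// Sketch of a plugin creator for a hypothetical custom layer named
// "MyCustomLayer". The IPluginV2 implementation that createPlugin()
// and deserializePlugin() would construct is omitted for brevity.
class MyCustomPluginCreator : public IPluginCreator {
 public:
  const char* getPluginName() const override { return "MyCustomLayer"; }
  const char* getPluginVersion() const override { return "1"; }
  const PluginFieldCollection* getFieldNames() override { return &fields_; }

  IPluginV2* createPlugin(const char* name, const PluginFieldCollection* fc) override {
    return nullptr;  // construct and return the custom layer implementation here
  }

  IPluginV2* deserializePlugin(const char* name, const void* data, size_t length) override {
    return nullptr;  // reconstruct the custom layer from its serialized form here
  }

  void setPluginNamespace(const char* ns) override { namespace_ = ns; }
  const char* getPluginNamespace() const override { return namespace_.c_str(); }

 private:
  std::string namespace_;
  PluginFieldCollection fields_{0, nullptr};
};

// Static registration adds the creator to TensorRT's plugin registry as
// soon as the shared library is loaded, which is what makes LD_PRELOAD
// sufficient to expose the custom layer to the server.
REGISTER_TENSORRT_PLUGIN(MyCustomPluginCreator);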

A limitation of this approach is that the custom layers must be managed separately from the model repository itself. More seriously, if custom layer names conflict across multiple shared libraries, there is currently no way to resolve the conflict.

TensorFlow

TensorFlow allows users to add custom operations which can then be used in TensorFlow models. By using LD_PRELOAD you can load your custom TensorFlow operations into the inference server. For example, assuming your TensorFlow custom operations are compiled into libtfcustom.so, starting the inference server with the following command makes those operations available to all TensorFlow models loaded into the server:

$ LD_PRELOAD=libtfcustom.so trtserver --model-repository=/tmp/models ...
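
The shared library is an ordinary TensorFlow custom op library: the op and its kernel are registered by static initializers, so loading the library is enough to make the operation visible to any TensorFlow model that uses it. The sketch below follows the well-known ZeroOut example from TensorFlow's custom op guide; the op name and behavior are illustrative only.

#include "tensorflow/core/framework/op.h"
#include "tensorflow/core/framework/op_kernel.h"
#include "tensorflow/core/framework/shape_inference.h"

using namespace tensorflow;

// Register the op's interface: one int32 input, one int32 output with
// the same shape as the input.
REGISTER_OP("ZeroOut")
    .Input("to_zero: int32")
    .Output("zeroed: int32")
    .SetShapeFn([](shape_inference::InferenceContext* c) {
      c->set_output(0, c->input(0));
      return Status::OK();
    });

// CPU kernel: keep the first element of the input and zero the rest.
class ZeroOutOp : public OpKernel {
 public:
  explicit ZeroOutOp(OpKernelConstruction* context) : OpKernel(context) {}

  void Compute(OpKernelContext* context) override {
    const Tensor& input_tensor = context->input(0);
    auto input = input_tensor.flat<int32>();

    Tensor* output_tensor = nullptr;
    OP_REQUIRES_OK(context, context->allocate_output(0, input_tensor.shape(),
                                                     &output_tensor));
    auto output = output_tensor->flat<int32>();

    const int N = input.size();
    for (int i = 0; i < N; i++) output(i) = 0;
    if (N > 0) output(0) = input(0);
  }
};

// Static kernel registration for CPU; it runs when the shared library is
// loaded via LD_PRELOAD.
REGISTER_KERNEL_BUILDER(Name("ZeroOut").Device(DEVICE_CPU), ZeroOutOp);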

A limitation of this approach is that the custom operations must be managed separately from the model repository itself. More seriously, if custom operation names conflict across multiple shared libraries, there is currently no way to resolve the conflict.

PyTorch

TorchScript allows users to add custom operations which can then be used in TorchScript models. By using LD_PRELOAD you can load your custom C++ operations into the inference server. For example, if you follow the instructions in the pytorch/extension-script repository and your TorchScript custom operations are compiled into (say) libpytcustom.so, starting the inference server with the following command makes those operations available to all PyTorch models loaded into the server:

$ LD_PRELOAD=libpytcustom.so trtserver --model-repository=/tmp/models ...
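
As in the pytorch/extension-script example, the shared library registers the custom operation with TorchScript through a static torch::RegisterOperators object, so loading the library is all that is required. The sketch below uses a made-up operation and namespace (my_ops::scaled_add) purely for illustration.

#include <torch/script.h>

// Hypothetical custom operation: returns a + b * scale.
torch::Tensor scaled_add(torch::Tensor a, torch::Tensor b, double scale) {
  return a + b * scale;
}

// Static registration runs when the shared library is loaded (e.g. via
// LD_PRELOAD), making my_ops::scaled_add callable from TorchScript models.
static auto registry =
    torch::RegisterOperators("my_ops::scaled_add", &scaled_add);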

A limitation of this approach is that the custom operations must be managed separately from the model repository itself. More seriously, if custom operation names, or the handles used to register them with PyTorch, conflict across multiple shared libraries, there is currently no way to resolve the conflict.