Cybersecurity Disclosures

  • Developers who provide deep learning models with their application for acceleration by TensorRT-RTX (for example, by storing them in the ONNX file format) are responsible for safeguarding the confidentiality of those models if confidentiality is needed for IP protection. This may be achieved by using appropriate encryption techniques (a minimal encryption sketch follows this list).

  • Even if the original model specification is encrypted, TensorRT-RTX saves its inference plan in a serialized TensorRT-RTX engine file, and it may be possible to reverse-engineer the original model from that engine file.

  • Developers who provide deep learning models with their application for acceleration by TensorRT-RTX (for example, by storing them in the ONNX file format) are responsible for safeguarding the integrity of those models if the accuracy of the inference outputs is critical to their use case. This may be achieved by using an appropriate digital signature scheme (see the signature sketch after this list). Note, however, that TensorRT-RTX makes no accuracy guarantees; inference outputs may differ from the model outputs observed during training and validation due to quantization, rounding, and other numerical inaccuracies.
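As a minimal illustration of the encryption guidance in the first bullet, the following Python sketch uses AES-256-GCM from the third-party cryptography package to keep a model artifact encrypted at rest and decrypt it only into memory. Because the second bullet notes that a serialized engine file may also reveal the original model, the same approach can be applied to engine bytes. The file names, key handling, and the cryptography dependency are illustrative assumptions, not part of TensorRT-RTX.

    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    def encrypt_artifact(plain_path: str, enc_path: str, key: bytes) -> None:
        # Encrypt a model artifact (e.g. an ONNX file or a serialized engine)
        # with AES-256-GCM; the 12-byte nonce is stored alongside the ciphertext.
        nonce = os.urandom(12)
        with open(plain_path, "rb") as f:
            data = f.read()
        ciphertext = AESGCM(key).encrypt(nonce, data, None)
        with open(enc_path, "wb") as f:
            f.write(nonce + ciphertext)

    def decrypt_artifact(enc_path: str, key: bytes) -> bytes:
        # Decrypt the artifact into memory only; avoid writing plaintext to disk.
        with open(enc_path, "rb") as f:
            blob = f.read()
        nonce, ciphertext = blob[:12], blob[12:]
        return AESGCM(key).decrypt(nonce, ciphertext, None)

    # Hypothetical usage: secure key management (HSM, OS keystore, etc.) is the
    # application's responsibility and outside the scope of this sketch.
    key = AESGCM.generate_key(bit_length=256)
    encrypt_artifact("model.onnx", "model.onnx.enc", key)
    model_bytes = decrypt_artifact("model.onnx.enc", key)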
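Similarly, the following sketch illustrates one possible digital signature scheme for the integrity guidance in the last bullet, using Ed25519 from the same cryptography package: the application ships a public key and refuses to load a model whose signature does not verify. The key distribution scheme and file name are illustrative assumptions.

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    # Signing side (e.g. at model packaging time).
    private_key = Ed25519PrivateKey.generate()
    public_key = private_key.public_key()

    with open("model.onnx", "rb") as f:
        model_bytes = f.read()
    signature = private_key.sign(model_bytes)

    # Verification side (e.g. in the application, before building an engine).
    try:
        public_key.verify(signature, model_bytes)
    except InvalidSignature:
        raise RuntimeError("Model integrity check failed; refusing to load.")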