Modulus Deploy

Core (Latest Release)

Decorator that checks whether the ONNX runtime is installed

modulus.deploy.onnx.utils.export_to_onnx_stream(model: Module, invars: Union[Tensor, Tuple[Tensor, ...]], verbose: bool = False) → bytes

Exports a PyTorch model to a byte stream instead of a file

Parameters

  • model (nn.Module) – PyTorch model to export

  • invars (Union[Tensor, Tuple[Tensor,...]]) – Input tensor(s)

  • verbose (bool, optional) – Print out a human-readable representation of the model, by default False

Returns

ONNX model byte stream

Return type

bytes

Exporting an ONNX model while training with CUDA graphs will likely break things, because the model must be copied to the CPU and back for export.


ONNX export can take longer when using custom ONNX functions.

© Copyright 2023, NVIDIA Modulus Team. Last updated on Apr 19, 2024.