Modulus Deploy

modulus.deploy.onnx.utils.check_ort_install(func)[source]

Decorator that checks whether ONNX Runtime is installed before the wrapped function is called
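
A minimal usage sketch, assuming the decorator is applied to any function that needs onnxruntime at call time; the wrapped function run_onnx_model below is a hypothetical example, not part of the Modulus API:

    from modulus.deploy.onnx.utils import check_ort_install

    @check_ort_install
    def run_onnx_model(model_bytes: bytes):
        # Importing onnxruntime here is safe: the decorator has already
        # verified that ONNX Runtime is installed before this body runs.
        import onnxruntime as ort
        return ort.InferenceSession(model_bytes)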

modulus.deploy.onnx.utils.export_to_onnx_stream(model: Module, invars: Union[Tensor, Tuple[Tensor, ...]], verbose: bool = False) → bytes[source]

Exports PyTorch model to byte stream instead of a file

Parameters
  • model (nn.Module) – PyTorch model to export

  • invars (Union[Tensor, Tuple[Tensor,...]]) – Input tensor(s)

  • verbose (bool, optional) – Print out a human-readable representation of the model, by default False

Returns

ONNX model byte stream

Return type

bytes

Note

Exporting an ONNX model during training while using CUDA graphs will likely break things, because the model must be copied to the CPU and back for export.

Note

ONNX export can take noticeably longer when custom ONNX functions are used.
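
A minimal sketch of exporting a model to an in-memory ONNX byte stream; the toy linear model and input shape are placeholders, and loading the returned bytes directly with onnxruntime assumes the stream is a fully serialized ONNX model:

    import torch
    from modulus.deploy.onnx.utils import export_to_onnx_stream

    # Placeholder model and example input tensor
    model = torch.nn.Linear(4, 2)
    invars = torch.randn(8, 4)

    # Export to an ONNX byte stream instead of writing a file
    onnx_bytes = export_to_onnx_stream(model, invars, verbose=False)

    # The byte stream can then be consumed in memory, e.g. by onnxruntime
    import onnxruntime as ort
    session = ort.InferenceSession(onnx_bytes)
    input_name = session.get_inputs()[0].name
    outputs = session.run(None, {input_name: invars.numpy()})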
