nvidia.dali.plugin.pytorch.fn.torch_python_function

nvidia.dali.plugin.pytorch.fn.torch_python_function(
*input,
function,
batch_processing=True,
bytes_per_sample_hint=[0],
num_outputs=1,
output_layouts=None,
preserve=False,
seed=-1,
device=None,
name=None,
)

Executes a function that operates on Torch tensors.

This operator is analogous to nvidia.dali.fn.python_function(), but the tensor data is handled as PyTorch tensors.

This operator allows sequence inputs and supports volumetric data.

This operator will not be optimized out of the graph.

Supported backends
  • ‘cpu’

  • ‘gpu’
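
A minimal usage sketch, assuming a standard file-reader/decoder front end; the callback name flip_horizontal and the file_root path are placeholders, and the exec_async=False / exec_pipelined=False settings reflect the usual requirement for python_function-style operators:

  import torch
  from nvidia.dali import pipeline_def, fn
  from nvidia.dali.plugin.pytorch.fn import torch_python_function

  # Hypothetical per-sample callback; receives a torch.Tensor in HWC layout.
  def flip_horizontal(image):
      return torch.flip(image, dims=[1])

  @pipeline_def(batch_size=8, num_threads=2, device_id=0,
                exec_async=False, exec_pipelined=False)
  def example_pipeline():
      jpegs, _ = fn.readers.file(file_root="/path/to/images")  # placeholder path
      images = fn.decoders.image(jpegs, device="cpu")
      flipped = torch_python_function(
          images,
          function=flip_horizontal,
          batch_processing=False,  # invoke the callback once per sample
          num_outputs=1,
          output_layouts="HWC",
      )
      return flipped

  pipe = example_pipeline()
  pipe.build()
  (flipped_batch,) = pipe.run()
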

Parameters:

  • __input_[0..255] (TensorList, optional) – This function accepts up to 256 optional positional inputs.

Keyword Arguments:
  • function (object) – Function object.

  • batch_processing (bool, optional, default = True) – Determines whether the function gets an entire batch as an input; see the sketch at the end of this section.

  • bytes_per_sample_hint (int or list of int, optional, default = [0]) –

    Output size hint, in bytes per sample.

    If specified, the operator’s outputs residing in GPU or page-locked host memory will be preallocated to accommodate a batch of samples of this size.

  • num_outputs (int, optional, default = 1) – Number of outputs.

  • output_layouts (layout str or list of layout str, optional) –

    Tensor data layouts for the outputs.

    This argument can be a list that contains a distinct layout for each output. If the list has fewer than num_outputs elements, only the first outputs have the layout set and the rest of the outputs have no layout assigned.

  • preserve (bool, optional, default = False) – Prevents the operator from being removed from the graph even if its outputs are not used.

  • seed (int, optional, default = -1) –

    Random seed.

    If not provided, it will be populated based on the global seed of the pipeline.
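
As a rough illustration of the batch_processing flag (a sketch based on the analogy with nvidia.dali.fn.python_function; the callback names are hypothetical): with batch_processing=True the function is called once per iteration and each input is assumed to arrive as a list of per-sample PyTorch tensors, whereas with batch_processing=False it is called once per sample with an individual tensor.

  import torch

  # Assumed batch-mode callback (batch_processing=True): each input arrives as a
  # list of per-sample torch.Tensor objects, mirroring nvidia.dali.fn.python_function.
  def normalize_batch(images):
      return [img.float() / 255.0 for img in images]

  # Assumed sample-mode callback (batch_processing=False): called once per sample
  # with a single torch.Tensor.
  def normalize_sample(image):
      return image.float() / 255.0

Either callback would be passed through the function argument, with batch_processing set accordingly.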