nvidia.dali.experimental.dynamic.reductions.variance

nvidia.dali.experimental.dynamic.reductions.variance(data, mean, /, *, batch_size=None, device=None, axes=None, axis_names=None, ddof=None, keep_dims=None)

Computes the variance of elements along the provided axes.

Supported backends
  • ‘cpu’

  • ‘gpu’

Parameters:
  • data (Tensor/Batch) – Input to the operator.

  • mean (float or Tensor/Batch of float) – Mean value to use in the calculations.

Keyword Arguments:
  • axes (int or list of int, optional) –

    Axis or axes along which reduction is performed.

    The accepted range is [-ndim, ndim-1]. Negative indices are counted from the back.

    Not providing any axis results in reduction of all elements.

  • axis_names (layout str, optional) –

    Name(s) of the axis or axes along which the reduction is performed.

    The input layout is used to translate the axis names to axis indices, for example axis_names="HW" with input layout "FHWC" is equivalent to specifying axes=[1,2]. This argument cannot be used together with axes.

  • ddof (int, optional, default = 0) – Delta Degrees of Freedom. Adjusts the divisor used in calculations, which is N - ddof.

  • keep_dims (bool, optional, default = False) – If True, the reduced dimensions are kept in the output with extent 1, preserving the original number of dimensions.
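The reduction semantics described above (axes selection, the N - ddof divisor, and keep_dims) follow the usual variance conventions. A minimal NumPy sketch can illustrate what the operator computes; the input array, its shape, and the axis choice here are hypothetical, and this is an illustration of the math, not the DALI implementation:

```python
import numpy as np

# Hypothetical HWC image-like input; reducing over axes (0, 1) mirrors
# axis_names="HW" with an input layout of "HWC".
data = np.arange(24, dtype=np.float32).reshape(2, 4, 3)

# The operator takes the mean as a separate input rather than computing it.
mean = data.mean(axis=(0, 1), keepdims=True)

# Variance with an externally supplied mean; the divisor is N - ddof,
# where N is the number of reduced elements (default ddof = 0).
ddof = 0
n = data.shape[0] * data.shape[1]
var = ((data - mean) ** 2).sum(axis=(0, 1)) / (n - ddof)

# keep_dims=True would retain the reduced axes with extent 1:
var_kept = var.reshape(1, 1, -1)
```

With ddof = 0 this matches the plain population variance over the reduced axes; setting ddof = 1 would give the sample (Bessel-corrected) variance.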