nvidia.dali.fn.jitter#
- nvidia.dali.fn.jitter(__input, /, *, bytes_per_sample_hint=[0], fill_value=0.0, interp_type=DALIInterpType.INTERP_NN, mask=1, nDegree=2, preserve=False, seed=-1, device=None, name=None)#
Performs a random Jitter augmentation.
The output images are produced by moving each pixel by a random amount in the x and y dimensions, bounded by half of the nDegree parameter.
- Supported backends
'gpu'
- Parameters:
__input (TensorList ('HWC')) – Input to the operator.
- Keyword Arguments:
bytes_per_sample_hint (int or list of int, optional, default = [0]) –
Output size hint, in bytes per sample.
If specified, the operator’s outputs residing in GPU or page-locked host memory will be preallocated to accommodate a batch of samples of this size.
fill_value (float, optional, default = 0.0) – Color value that is used for padding.
interp_type (nvidia.dali.types.DALIInterpType, optional, default = DALIInterpType.INTERP_NN) – Type of interpolation used.
mask (int or TensorList of int, optional, default = 1) –
Determines whether to apply this augmentation to the input image.
Here are the values:
0: Do not apply this transformation.
1: Apply this transformation.
nDegree (int, optional, default = 2) – Each pixel is moved by a random amount in the [-nDegree/2, nDegree/2] range.
preserve (bool, optional, default = False) – Prevents the operator from being removed from the graph even if its outputs are not used.
seed (int, optional, default = -1) – Random seed; if not set, one will be assigned automatically.
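The displacement semantics described above can be illustrated with a plain NumPy sketch (this is not DALI's implementation, just an approximation of the operator's behavior): each output pixel reads the input at a position offset by a random amount in [-nDegree/2, nDegree/2] along each axis (nearest-neighbor sampling), and reads that fall outside the image are padded with fill_value.

```python
import numpy as np

def jitter_sketch(image, nDegree=2, fill_value=0.0, seed=None):
    """Approximate the jitter augmentation on an HWC image.

    Each output pixel samples the input at a position offset by a random
    integer amount in [-nDegree//2, nDegree//2] along y and x
    (nearest-neighbor); out-of-bounds reads are filled with fill_value.
    """
    rng = np.random.default_rng(seed)
    h, w = image.shape[:2]
    half = nDegree // 2
    # Per-pixel random offsets in [-half, half] for each axis.
    dy = rng.integers(-half, half + 1, size=(h, w))
    dx = rng.integers(-half, half + 1, size=(h, w))
    ys = np.arange(h)[:, None] + dy
    xs = np.arange(w)[None, :] + dx
    inside = (ys >= 0) & (ys < h) & (xs >= 0) & (xs < w)
    out = np.full_like(image, fill_value)
    # Gather source pixels; clipped indices are masked out by `inside`.
    src = image[np.clip(ys, 0, h - 1), np.clip(xs, 0, w - 1)]
    out[inside] = src[inside]
    return out
```

With nDegree=0 the offsets are all zero and the output equals the input; larger values produce the characteristic "shaky" look of the augmentation.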