nvidia.dali.fn.to_decibels
nvidia.dali.fn.to_decibels(__input, /, *, bytes_per_sample_hint=[0], cutoff_db=-200.0, multiplier=10.0, preserve=False, reference=0.0, seed=-1, device=None, name=None)
Converts a magnitude (real, positive) to the decibel scale.
Conversion is done according to the following formula:
min_ratio = pow(10, cutoff_db / multiplier)
out[i] = multiplier * log10(max(min_ratio, input[i] / reference))
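As a rough illustration, the formula above can be sketched in NumPy. This is a hypothetical reference implementation, not the operator itself; the function name and parameter names merely mirror the operator's keyword arguments.

```python
import numpy as np

# Hypothetical NumPy sketch of the conversion formula above.
# min_ratio implements the cut-off: ratios below it saturate to cutoff_db.
def to_decibels(x, cutoff_db=-200.0, multiplier=10.0, reference=1.0):
    min_ratio = 10.0 ** (cutoff_db / multiplier)
    return multiplier * np.log10(np.maximum(min_ratio, x / reference))

print(to_decibels(np.array([1.0, 0.001])))   # 0 dB and -30 dB
print(to_decibels(np.array([0.0]), cutoff_db=-80.0))  # saturates at -80 dB
```

Note how a zero input does not produce `-inf`: the ratio is clamped at `min_ratio`, so the output saturates at `cutoff_db`.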
- Supported backends
‘cpu’
‘gpu’
- Parameters:
__input (TensorList) – Input to the operator.
- Keyword Arguments:
bytes_per_sample_hint (int or list of int, optional, default = [0]) –
Output size hint, in bytes per sample.
If specified, the operator’s outputs residing in GPU or page-locked host memory will be preallocated to accommodate a batch of samples of this size.
cutoff_db (float, optional, default = -200.0) –
Minimum, or cut-off, ratio in dB.
Any value below this threshold will saturate. For example, cutoff_db=-80 corresponds to a minimum ratio of 1e-8.
multiplier (float, optional, default = 10.0) – Factor by which the logarithm is multiplied. The value is typically 10.0 or 20.0, depending on whether the magnitude is squared.
preserve (bool, optional, default = False) – Prevents the operator from being removed from the graph even if its outputs are not used.
reference (float, optional, default = 0.0) –
Reference magnitude.
If a value is not provided (that is, the default 0.0 is used), the maximum value of the input will be used as the reference.
Note
The maximum of the input will be calculated on a per-sample basis.
seed (int, optional, default = -1) –
Random seed.
If not provided, it will be populated based on the global seed of the pipeline.
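The default reference behavior described above can be sketched in NumPy as well. This is a hypothetical illustration, assuming each call processes one sample, so the per-sample maximum of the input serves as the reference magnitude.

```python
import numpy as np

# Hypothetical sketch of the default reference=0.0 behavior:
# the maximum of the sample is used as the reference magnitude,
# so the largest input value maps to 0 dB.
def to_decibels_auto_ref(x, cutoff_db=-200.0, multiplier=10.0):
    reference = x.max()  # per-sample maximum acts as the reference
    min_ratio = 10.0 ** (cutoff_db / multiplier)
    return multiplier * np.log10(np.maximum(min_ratio, x / reference))

print(to_decibels_auto_ref(np.array([10.0, 100.0, 1000.0])))
```

With multiplier=10.0, the values 10, 100, and 1000 map to -20 dB, -10 dB, and 0 dB relative to the per-sample maximum.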
See also