nvidia.dali.fn.decoders.image
- nvidia.dali.fn.decoders.image(__input, /, *, affine=True, bytes_per_sample_hint=[0], cache_batch_copy=True, cache_debug=False, cache_size=0, cache_threshold=0, cache_type='', device_memory_padding=16777216, device_memory_padding_jpeg2k=0, host_memory_padding=8388608, host_memory_padding_jpeg2k=0, hw_decoder_load=0.65, hybrid_huffman_threshold=1000000, jpeg_fancy_upsampling=False, memory_stats=False, output_type=DALIImageType.RGB, preallocate_height_hint=0, preallocate_width_hint=0, preserve=False, seed=-1, use_fast_idct=False, device=None, name=None)
Decodes images.
For JPEG images, depending on the selected backend (“mixed” or “cpu”), the implementation uses the nvJPEG library or libjpeg-turbo, respectively. Other image formats are decoded with OpenCV or other specific libraries, such as libtiff.
If used with a “mixed” backend and the hardware is available, the operator uses a dedicated hardware decoder.
Warning
For performance reasons, the hardware decoder is disabled for drivers older than 455.x.
The output of the decoder is in HWC layout.
Supported formats: JPG, BMP, PNG, TIFF, PNM, PPM, PGM, PBM, JPEG 2000, WebP. Please note that GPU acceleration for JPEG 2000 decoding is only available for CUDA 11 and newer.
Note
WebP decoding currently only supports the simple file format (lossy and lossless compression). For details on the different WebP file formats, see https://developers.google.com/speed/webp/docs/riff_container
Note
EXIF orientation metadata is disregarded.
- Supported backends
‘cpu’
‘mixed’
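A minimal usage sketch follows; the file_root path and pipeline parameters are placeholders, not recommendations:

from nvidia.dali import pipeline_def, fn, types

@pipeline_def(batch_size=32, num_threads=4, device_id=0)
def decode_pipeline():
    # Read raw, encoded image files; "images/" is a placeholder directory
    encoded, labels = fn.readers.file(file_root="images/")
    # Decode on the GPU ("mixed" backend); use device="cpu" for the
    # libjpeg-turbo/OpenCV code path instead
    images = fn.decoders.image(encoded, device="mixed", output_type=types.RGB)
    return images, labels

pipe = decode_pipeline()
pipe.build()
images, labels = pipe.run()  # images is a GPU TensorList in HWC layout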
- Parameters:
__input (TensorList) – Input to the operator.
- Keyword Arguments:
affine (bool, optional, default = True) –
Applies only to the “mixed” backend type.
If set to True, each thread in the internal thread pool will be tied to a specific CPU core. Otherwise, the threads can be reassigned to any CPU core by the operating system.
bytes_per_sample_hint (int or list of int, optional, default = [0]) –
Output size hint, in bytes per sample.
If specified, the operator’s outputs residing in GPU or page-locked host memory will be preallocated to accommodate a batch of samples of this size.
cache_batch_copy (bool, optional, default = True) –
Applies only to the “mixed” backend type.
If set to True, multiple images from the cache are copied with a batched copy kernel call. Otherwise, unless the order in the batch is the same as in the cache, each image is copied with cudaMemcpy.
cache_debug (bool, optional, default = False) –
Applies only to the “mixed” backend type.
Prints the debug information about the decoder cache.
cache_size (int, optional, default = 0) –
Applies only to the “mixed” backend type.
Total size of the decoder cache in megabytes. When provided, decoded images larger than cache_threshold will be cached in GPU memory.
cache_threshold (int, optional, default = 0) –
Applies only to the “mixed” backend type.
The size threshold, in bytes, for decoded images to be cached. When an image is cached, it no longer needs to be decoded when it is encountered at the operator input, saving processing time.
cache_type (str, optional, default = ‘’) –
Applies only to the “mixed” backend type.
Here is a list of the available cache types:
threshold: caches every image with a size larger than cache_threshold until the cache is full. The warm-up time for the threshold policy is 1 epoch.
largest: stores the largest images that can fit in the cache. The warm-up time for the largest policy is 2 epochs.
Note
To take advantage of caching, it is recommended to configure readers with stick_to_shard=True to limit the number of unique images seen by each decoder instance in a multi-node environment.
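For illustration, a sketch of a cached “mixed” decoder combined with stick_to_shard in the reader; the path, shard configuration, and cache sizes below are arbitrary placeholders:

from nvidia.dali import pipeline_def, fn

@pipeline_def(batch_size=32, num_threads=4, device_id=0)
def cached_decode_pipeline():
    # Keep each shard's image set fixed so the cache can be effective
    encoded, labels = fn.readers.file(
        file_root="images/", num_shards=2, shard_id=0, stick_to_shard=True)
    images = fn.decoders.image(
        encoded,
        device="mixed",
        cache_size=512,           # total decoder cache size, in megabytes
        cache_threshold=250000,   # cache decoded images larger than ~250 KB
        cache_type="threshold",   # or "largest"
        cache_batch_copy=True)
    return images, labels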
device_memory_padding (int, optional, default = 16777216) –
Applies only to the “mixed” backend type.
The padding for nvJPEG’s device memory allocations, in bytes. This parameter helps to avoid reallocation in nvJPEG when a larger image is encountered, and the internal buffer needs to be reallocated to decode the image.
If a value greater than 0 is provided, the operator preallocates one device buffer of the requested size per thread. If the value is correctly selected, no additional allocations will occur during the pipeline execution. One way to find the ideal value is to do a complete run over the dataset with the memory_stats argument set to True and then copy the largest allocation value that was printed in the statistics.
device_memory_padding_jpeg2k (int, optional, default = 0) –
Applies only to the “mixed” backend type.
The padding for nvJPEG2k’s device memory allocations, in bytes. This parameter helps to avoid reallocation in nvJPEG2k when a larger image is encountered, and the internal buffer needs to be reallocated to decode the image.
If a value greater than 0 is provided, the operator preallocates the necessary number of buffers according to the hint provided. If the value is correctly selected, no additional allocations will occur during the pipeline execution. One way to find the ideal value is to do a complete run over the dataset with the memory_stats argument set to True and then copy the largest allocation value that was printed in the statistics.
host_memory_padding (int, optional, default = 8388608) –
Applies only to the “mixed” backend type.
The padding for nvJPEG’s host memory allocations, in bytes. This parameter helps to prevent reallocation in nvJPEG when a larger image is encountered, and the internal buffer needs to be reallocated to decode the image.
If a value greater than 0 is provided, the operator preallocates two (because of double-buffering) host-pinned buffers of the requested size per thread. If selected correctly, no additional allocations will occur during the pipeline execution. One way to find the ideal value is to do a complete run over the dataset with the memory_stats argument set to True, and then copy the largest allocation value that is printed in the statistics.
host_memory_padding_jpeg2k (int, optional, default = 0) –
Applies only to the “mixed” backend type.
The padding for nvJPEG2k’s host memory allocations, in bytes. This parameter helps to prevent reallocation in nvJPEG2k when a larger image is encountered, and the internal buffer needs to be reallocated to decode the image.
If a value greater than 0 is provided, the operator preallocates the necessary number of buffers according to the hint provided. If the value is correctly selected, no additional allocations will occur during the pipeline execution. One way to find the ideal value is to do a complete run over the dataset with the memory_stats argument set to True, and then copy the largest allocation value that is printed in the statistics.
hw_decoder_load (float, optional, default = 0.65) –
The percentage of the image data to be processed by the HW JPEG decoder.
Applies only to the “mixed” backend type on NVIDIA Ampere and newer GPU architectures.
Determines the percentage of the workload that is offloaded to the hardware decoder, if available. The optimal workload depends on the number of threads provided to the DALI pipeline and should be found empirically. More details can be found at https://developer.nvidia.com/blog/loading-data-fast-with-dali-and-new-jpeg-decoder-in-a100
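As an illustrative fragment to be placed inside a pipeline definition (the 0.75 value is an arbitrary example, not a recommendation, and `encoded` is assumed to come from a reader as in the earlier sketch):

# Offload ~75% of the JPEG workload to the HW decoder and leave the rest
# to the CUDA/CPU paths; tune this value empirically for your thread count.
images = fn.decoders.image(encoded, device="mixed", hw_decoder_load=0.75)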
hybrid_huffman_threshold (int, optional, default = 1000000) –
Applies only to the “mixed” backend type.
Images with a total number of pixels (height * width) that is higher than this threshold will use the nvJPEG hybrid Huffman decoder. Images that have fewer pixels will use the nvJPEG host-side Huffman decoder.
Note
Hybrid Huffman decoder still largely uses the CPU.
jpeg_fancy_upsampling (bool, optional, default = False) –
Makes the “mixed” backend use the same chroma upsampling approach as the “cpu” one.
The option corresponds to the JPEG fancy upsampling available in libjpeg-turbo or ImageMagick.
memory_stats (bool, optional, default = False) –
Applies only to the “mixed” backend type.
Prints debug information about nvJPEG allocations. The information about the largest allocation might be useful to determine suitable values for device_memory_padding and host_memory_padding for a dataset.
Note
The statistics are global for the entire process, not per operator instance, and include the allocations made during construction if the padding hints are non-zero.
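A possible tuning workflow, sketched under the assumption that the maximum allocations reported by a memory_stats run are fed back as padding hints (the byte values below are placeholders):

# Step 1: run one full epoch with memory_stats=True and note the largest
# device and host allocations that nvJPEG prints.
images = fn.decoders.image(encoded, device="mixed", memory_stats=True)

# Step 2: use those values as padding hints so no reallocation happens
# during regular runs.
images = fn.decoders.image(
    encoded,
    device="mixed",
    device_memory_padding=24 * 1024 * 1024,
    host_memory_padding=12 * 1024 * 1024)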
output_type (nvidia.dali.types.DALIImageType, optional, default = DALIImageType.RGB) –
The color space of the output image.
Note: When decoding to YCbCr, the image will be decoded to RGB and then converted to YCbCr, following the YCbCr definition from ITU-R BT.601.
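For example, decoding directly to single-channel grayscale (a fragment for use inside a pipeline definition, with `encoded` coming from a reader):

from nvidia.dali import types

# Decode to grayscale instead of the default RGB
gray = fn.decoders.image(encoded, device="mixed", output_type=types.GRAY)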
preallocate_height_hint (int, optional, default = 0) –
Image height hint.
Applies only to the “mixed” backend type on NVIDIA Ampere and newer GPU architectures.
The hint is used to preallocate memory for the HW JPEG decoder.
preallocate_width_hint (int, optional, default = 0) –
Image width hint.
Applies only to the “mixed” backend type on NVIDIA Ampere and newer GPU architectures.
The hint is used to preallocate memory for the HW JPEG decoder.
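A sketch of preallocating HW decoder memory when the largest image dimensions in the dataset are known in advance (the 4000 x 3000 values are placeholders):

# Preallocate HW JPEG decoder buffers for images up to 4000 x 3000 pixels
images = fn.decoders.image(
    encoded,
    device="mixed",
    preallocate_width_hint=4000,
    preallocate_height_hint=3000)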
preserve (bool, optional, default = False) – Prevents the operator from being removed from the graph even if its outputs are not used.
seed (int, optional, default = -1) –
Random seed.
If not provided, it will be populated based on the global seed of the pipeline.
split_stages (bool) –
Warning
The argument split_stages is no longer used and will be removed in a future release.
use_chunk_allocator (bool) –
Warning
The argument use_chunk_allocator is no longer used and will be removed in a future release.
use_fast_idct (bool, optional, default = False) –
Enables fast IDCT in the libjpeg-turbo based CPU decoder, used when device is set to “cpu” or when it is set to “mixed” but the particular image cannot be handled by the GPU implementation.
According to the libjpeg-turbo documentation, decompression performance is improved by up to 14% with little reduction in quality.
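A sketch of a CPU-only decode path with fast IDCT enabled (a fragment for use inside a pipeline definition, with `encoded` coming from a reader as above):

# libjpeg-turbo based CPU decoding with the faster, slightly lossy IDCT
images = fn.decoders.image(encoded, device="cpu", use_fast_idct=True)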
See also