TensorFlow Plugin API reference

class nvidia.dali.plugin.tf.DALIDataset(pipeline='', batch_size=1, num_threads=4, device_id=0, exec_separated=False, prefetch_queue_depth=2, cpu_prefetch_queue_depth=2, gpu_prefetch_queue_depth=2, shapes=[], dtypes=[])

Creates a DALIDataset compatible with tf.data.Dataset from a DALI pipeline. It supports TensorFlow 1.13, 1.14, 1.15, and 2.0.

Please keep in mind that TensorFlow allocates almost all available device memory by default, which may cause out-of-memory errors in DALI. For instructions on how to change this behaviour, refer to the TensorFlow documentation, as the recommended approach may differ depending on your use case.
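For example, a minimal sketch (assuming the TensorFlow 1.x-style session API, which is also available via tf.compat.v1 on 2.0) that makes TensorFlow grow its GPU allocations on demand instead of reserving the whole device up front:

    import tensorflow as tf

    # Sketch only: grow GPU allocations on demand instead of reserving
    # (almost) all device memory at startup, leaving room for DALI.
    config = tf.compat.v1.ConfigProto()
    config.gpu_options.allow_growth = True
    session = tf.compat.v1.Session(config=config)

    # On TensorFlow 2.0, the equivalent setting is:
    # for gpu in tf.config.experimental.list_physical_devices('GPU'):
    #     tf.config.experimental.set_memory_growth(gpu, True)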

Parameters
  • pipeline (nvidia.dali.Pipeline) – the pipeline defining the augmentations to be performed.

  • batch_size (int) – batch size of the pipeline.

  • num_threads (int) – number of CPU threads used by the pipeline.

  • device_id (int) – id of GPU used by the pipeline.

  • exec_separated (bool) – whether to execute the pipeline in a way that enables overlapping CPU and GPU computation, typically resulting in faster execution, but larger memory consumption.

  • prefetch_queue_depth (int) – depth of the executor queue. A deeper queue makes DALI more resistant to uneven execution times of batches, but it also consumes more memory for internal buffers. This value is used when exec_separated is set to False.

  • cpu_prefetch_queue_depth (int) – depth of the executor CPU queue. A deeper queue makes DALI more resistant to uneven execution times of batches, but it also consumes more memory for internal buffers. This value is used when exec_separated is set to True.

  • gpu_prefetch_queue_depth (int) – depth of the executor GPU queue. A deeper queue makes DALI more resistant to uneven execution times of batches, but it also consumes more memory for internal buffers. This value is used when exec_separated is set to True.

  • shapes (List of tuples) – expected output shapes.

  • dtypes (List of tf.DType) – expected output types.

Returns

DALIDataset object based on the DALI pipeline and compatible with the tf.data.Dataset API.
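A minimal usage sketch follows; my_pipeline, the shapes, and the dtypes are placeholders to be replaced with those of your own nvidia.dali.Pipeline:

    import tensorflow as tf
    import nvidia.dali.plugin.tf as dali_tf

    # `my_pipeline` is a placeholder for an nvidia.dali.Pipeline defined
    # elsewhere that outputs (images, labels) batches.
    dataset = dali_tf.DALIDataset(
        pipeline=my_pipeline,
        batch_size=32,
        num_threads=4,
        device_id=0,
        shapes=[(32, 224, 224, 3), (32,)],
        dtypes=[tf.float32, tf.int32])

    # On TensorFlow 2.0 the dataset can be iterated eagerly; on 1.x, use
    # the standard tf.data iterator APIs instead.
    for images, labels in dataset:
        pass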

nvidia.dali.plugin.tf.DALIIterator()

TF Plugin Wrapper

This operator works in the same way as the DALI TensorFlow plugin, with the exception that it also accepts Pipeline objects as input and serializes them internally. For more information, please see the TensorFlow Plugin API reference in the documentation.

nvidia.dali.plugin.tf.DALIIteratorWrapper(pipeline=None, serialized_pipeline=None, sparse=[], shapes=[], dtypes=[], batch_size=-1, prefetch_queue_depth=2, **kwargs)

TF Plugin Wrapper

This operator works in the same way as the DALI TensorFlow plugin, with the exception that it also accepts Pipeline objects as input and serializes them internally. For more information, please see the TensorFlow Plugin API reference in the documentation.
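A usage sketch of this op-style API; pipe, the shapes, and the dtypes are placeholders for your own pipeline and its outputs:

    import tensorflow as tf
    import nvidia.dali.plugin.tf as dali_tf

    # `pipe` stands in for an nvidia.dali.Pipeline built elsewhere that
    # outputs (images, labels) batches.
    daliop = dali_tf.DALIIterator()
    with tf.device('/gpu:0'):
        images, labels = daliop(
            pipeline=pipe,
            shapes=[(32, 224, 224, 3), (32,)],
            dtypes=[tf.float32, tf.int32],
            device_id=0)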

nvidia.dali.plugin.tf.DALIRawIterator()

DALI TensorFlow plugin

Creates a DALI pipeline for classification tasks from a serialized DALI pipeline (given in the serialized_pipeline parameter). shapes must match the shapes of the corresponding DALI pipeline output tensors, and dtypes must match the types of those tensors.

Parameters
  • serialized_pipeline – A string.

  • shapes – A list of shapes (each a tf.TensorShape or list of ints) that has length >= 1.

  • dtypes – A list of tf.DTypes from: tf.half, tf.float32, tf.uint8, tf.int16, tf.int32, tf.int64 that has length >= 1.

  • num_threads – An optional int. Defaults to -1.

  • device_id – An optional int. Defaults to -1.

  • exec_separated – An optional bool. Defaults to False.

  • gpu_prefetch_queue_depth – An optional int. Defaults to 2.

  • cpu_prefetch_queue_depth – An optional int. Defaults to 2.

  • sparse – An optional list of bools. Defaults to [].

  • batch_size – An optional int. Defaults to -1.

  • name – A name for the operation (optional).

Returns

A list of Tensor objects of type dtypes.

Please keep in mind that TensorFlow allocates almost all available device memory by default, which may cause out-of-memory errors in DALI. For instructions on how to change this behaviour, refer to the TensorFlow documentation, as the recommended approach may differ depending on your use case.
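A sketch of driving the raw op directly; pipe is a placeholder nvidia.dali.Pipeline, and Pipeline.serialize() produces the string expected by serialized_pipeline:

    import tensorflow as tf
    import nvidia.dali.plugin.tf as dali_tf

    # `pipe` is a placeholder nvidia.dali.Pipeline; serialize() yields the
    # string that the raw op expects in `serialized_pipeline`.
    daliop = dali_tf.DALIRawIterator()
    images, labels = daliop(
        serialized_pipeline=pipe.serialize(),
        shapes=[tf.TensorShape([32, 224, 224, 3]), tf.TensorShape([32])],
        dtypes=[tf.float32, tf.int32],
        batch_size=32,
        num_threads=4,
        device_id=0)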