PaddlePaddle Plugin API reference
class nvidia.dali.plugin.paddle.DALIClassificationIterator(pipelines, size=-1, reader_name=None, auto_reset=False, fill_last_batch=True, dynamic_shape=False, last_batch_padded=False)

DALI iterator for classification tasks for Paddle. It returns 2 outputs (data and label) in the form of LoDTensor.
Calling DALIClassificationIterator(pipelines, size) is equivalent to calling DALIGenericIterator(pipelines, ["data", "label"], size).
Please keep in mind that Tensors returned by the iterator are still owned by DALI. They are valid until the next iterator call. If the content needs to be preserved, copy it to another tensor.
- Parameters
pipelines (list of nvidia.dali.pipeline.Pipeline) – List of pipelines to use
size (int, default = -1) – Number of samples in the shard for the wrapped pipeline (if there is more than one pipeline, it is the sum of their shard sizes). Providing -1 means that the iterator will work until StopIteration is raised from inside iter_setup(). The options fill_last_batch, last_batch_padded and auto_reset do not work in that case, and it works only with a single pipeline inside the iterator. Mutually exclusive with the reader_name argument.
reader_name (str, default = None) – Name of the reader which will be queried for the shard size, number of shards and all other properties necessary to properly count the number of relevant and padded samples that the iterator needs to deal with. It automatically sets fill_last_batch and last_batch_padded to match the reader's configuration.
auto_reset (bool, optional, default = False) – Whether the iterator resets itself for the next epoch or it requires reset() to be called separately.
fill_last_batch (bool, optional, default = True) – Whether to fill the last batch up to a full batch, so that the total number of entries returned by the iterator is the first integer multiple of self._num_gpus * self.batch_size that exceeds 'size'. Setting this flag to False will cause the iterator to return a partial last batch, so that the total number of entries equals 'size'.
dynamic_shape (bool, optional, default = False) – Whether the shape of the output of the DALI pipeline can change during execution. If True, the LoDtensor will be resized accordingly if the shape of DALI returned tensors changes during execution. If False, the iterator will fail in case of change.
last_batch_padded (bool, optional, default = False) – Whether the last batch provided by DALI is padded with repetitions of the last sample or wraps around to the beginning of the data set. In conjunction with fill_last_batch it tells whether an iterator returning a last batch only partially filled with data from the current epoch drops the padding samples or samples from the next epoch. If set to False, the next epoch will end sooner, as data from it was already consumed but dropped. If set to True, the next epoch will be the same length as the first one. For this to happen, the option pad_last_batch in the reader needs to be set to True as well. It is overridden when the reader_name argument is provided.
Example

With the data set [1, 2, 3, 4, 5, 6, 7] and the batch size 2:

fill_last_batch = False, last_batch_padded = True -> last batch = [7], next iteration will return [1, 2]
fill_last_batch = False, last_batch_padded = False -> last batch = [7], next iteration will return [2, 3]
fill_last_batch = True, last_batch_padded = True -> last batch = [7, 7], next iteration will return [1, 2]
fill_last_batch = True, last_batch_padded = False -> last batch = [7, 1], next iteration will return [2, 3]
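The four combinations above can be simulated in plain Python. The sketch below mimics only the batching arithmetic and is a hypothetical helper, not DALI's implementation: the reader either pads the epoch with the last sample or wraps into the next epoch, and the iterator optionally trims the final batch.

```python
def epoch_batches(data, batch_size, fill_last_batch, last_batch_padded):
    """Return (last batch of epoch 1, first batch of epoch 2).

    A sketch of the documented semantics, not DALI's actual code.
    """
    if last_batch_padded:
        # The reader pads the epoch with copies of the last sample
        # until it divides evenly, then starts the next epoch fresh.
        pad = (-len(data)) % batch_size
        stream = data + [data[-1]] * pad + data
    else:
        # Without padding the reader simply wraps into the next epoch.
        stream = data + data

    n_batches = -(-len(data) // batch_size)   # ceil division
    consumed = n_batches * batch_size
    last = stream[consumed - batch_size:consumed]
    if not fill_last_batch:
        # Trim the last batch to samples belonging to this epoch.
        remaining = len(data) - (n_batches - 1) * batch_size
        last = last[:remaining]
    next_first = stream[consumed:consumed + batch_size]
    return last, next_first


data = [1, 2, 3, 4, 5, 6, 7]
print(epoch_batches(data, 2, False, True))   # ([7], [1, 2])
print(epoch_batches(data, 2, False, False))  # ([7], [2, 3])
print(epoch_batches(data, 2, True, True))    # ([7, 7], [1, 2])
print(epoch_batches(data, 2, True, False))   # ([7, 1], [2, 3])
```

The printed pairs reproduce the four rows of the example above.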
next()

Returns the next batch of data.

reset()

Resets the iterator after the full epoch. DALI iterators do not support resetting before the end of the epoch and will ignore such a request.
property size
-
class nvidia.dali.plugin.paddle.DALIGenericIterator(pipelines, output_map, size=-1, reader_name=None, auto_reset=False, fill_last_batch=True, dynamic_shape=False, last_batch_padded=False)

General DALI iterator for Paddle. It can return any number of outputs from the DALI pipeline in the form of Paddle's Tensors.
Please keep in mind that Tensors returned by the iterator are still owned by DALI. They are valid until the next iterator call. If the content needs to be preserved, copy it to another tensor.
- Parameters
pipelines (list of nvidia.dali.pipeline.Pipeline) – List of pipelines to use
output_map (list of str or pair of type (str, int)) – The strings map consecutive outputs of the DALI pipeline to user-specified names. Outputs will be returned from the iterator as a dictionary keyed by those names. Each name should be distinct. An item can also be a pair (str, int), where the int value specifies the LoD level of the resulting LoDTensor.
size (int, default = -1) – Number of samples in the shard for the wrapped pipeline (if there is more than one pipeline, it is the sum of their shard sizes). Providing -1 means that the iterator will work until StopIteration is raised from inside iter_setup(). The options fill_last_batch, last_batch_padded and auto_reset do not work in that case, and it works only with a single pipeline inside the iterator. Mutually exclusive with the reader_name argument.
reader_name (str, default = None) – Name of the reader which will be queried for the shard size, number of shards and all other properties necessary to properly count the number of relevant and padded samples that the iterator needs to deal with. It automatically sets fill_last_batch and last_batch_padded to match the reader's configuration.
auto_reset (bool, optional, default = False) – Whether the iterator resets itself for the next epoch or it requires reset() to be called separately.
fill_last_batch (bool, optional, default = True) – Whether to fill the last batch up to a full batch, so that the total number of entries returned by the iterator is the first integer multiple of self._num_gpus * self.batch_size that exceeds 'size'. Setting this flag to False will cause the iterator to return a partial last batch, so that the total number of entries equals 'size'.
dynamic_shape (bool, optional, default = False) – Whether the shape of the output of the DALI pipeline can change during execution. If True, the LoDTensor will be resized accordingly if the shape of DALI returned tensors changes during execution. If False, the iterator will fail in case of change.
last_batch_padded (bool, optional, default = False) – Whether the last batch provided by DALI is padded with repetitions of the last sample or wraps around to the beginning of the data set. In conjunction with fill_last_batch it tells whether an iterator returning a last batch only partially filled with data from the current epoch drops the padding samples or samples from the next epoch. If set to False, the next epoch will end sooner, as data from it was already consumed but dropped. If set to True, the next epoch will be the same length as the first one. For this to happen, the option pad_last_batch in the reader needs to be set to True as well. It is overridden when the reader_name argument is provided.
Example

With the data set [1, 2, 3, 4, 5, 6, 7] and the batch size 2:

fill_last_batch = False, last_batch_padded = True -> last batch = [7], next iteration will return [1, 2]
fill_last_batch = False, last_batch_padded = False -> last batch = [7], next iteration will return [2, 3]
fill_last_batch = True, last_batch_padded = True -> last batch = [7, 7], next iteration will return [1, 2]
fill_last_batch = True, last_batch_padded = False -> last batch = [7, 1], next iteration will return [2, 3]
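The handling of mixed output_map entries described above can be sketched in plain Python. normalize_output_map is a hypothetical helper, not part of the plugin: plain strings are treated as LoD level 0, pairs keep their explicit level, and duplicate names are rejected because each output name must be distinct.

```python
def normalize_output_map(output_map):
    """Turn a mixed list of names and (name, lod_level) pairs into a
    uniform list of (name, lod_level) tuples; plain strings get LoD
    level 0. A sketch of the documented contract, not DALI's code."""
    normalized = []
    for item in output_map:
        if isinstance(item, str):
            normalized.append((item, 0))
        else:
            name, lod_level = item
            normalized.append((name, lod_level))
    names = [name for name, _ in normalized]
    if len(names) != len(set(names)):
        raise ValueError("output_map names must be distinct")
    return normalized


print(normalize_output_map(["data", ("label", 0), ("text_ids", 1)]))
# [('data', 0), ('label', 0), ('text_ids', 1)]
```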
next()

Returns the next batch of data.

reset()

Resets the iterator after the full epoch. DALI iterators do not support resetting before the end of the epoch and will ignore such a request.
property size
nvidia.dali.plugin.paddle.feed_ndarray(dali_tensor, ptr, cuda_stream=None)

Copy the contents of a DALI tensor to Paddle's Tensor.
- Parameters
dali_tensor (dali.backend.TensorCPU or dali.backend.TensorGPU) – Tensor from which to copy
ptr (LoDTensor data pointer) – Destination of the copy
cuda_stream (cudaStream_t handle or any value that can be cast to cudaStream_t) – CUDA stream to be used for the copy (if not provided, an internal user stream will be selected)
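Conceptually, feed_ndarray moves raw bytes from the DALI buffer into memory the destination LoDTensor has already allocated. A rough CPU-side analogue using ctypes (a sketch with illustrative names; for GPU tensors the real helper performs the copy on the given CUDA stream instead):

```python
import ctypes


def feed_ndarray_sketch(src_bytes, dst_ptr, nbytes):
    # Move nbytes from the source buffer into preallocated destination
    # memory addressed by a raw pointer, as the real helper does with
    # the LoDTensor data pointer.
    ctypes.memmove(dst_ptr, src_bytes, nbytes)


# Destination buffer stands in for the LoDTensor's allocation.
buf = ctypes.create_string_buffer(4)
feed_ndarray_sketch(b"\x01\x02\x03\x04", ctypes.addressof(buf), 4)
print(buf.raw)  # b'\x01\x02\x03\x04'
```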
nvidia.dali.plugin.paddle.lod_tensor_clip(lod_tensor, size)
nvidia.dali.plugin.paddle.recursive_length(tensor, lod_level)
nvidia.dali.plugin.paddle.to_paddle_type(tensor)

Get the Paddle dtype for a given tensor or tensor list.
- Parameters
tensor – tensor or tensor list
Returns: fluid.core.VarDesc.VarType
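The kind of lookup to_paddle_type performs can be illustrated with a small table. All keys and VarType names below are illustrative stand-ins; the authoritative mapping to fluid.core.VarDesc.VarType members lives inside the plugin.

```python
# Hypothetical stand-in for the DALI-dtype -> Paddle VarType lookup.
PADDLE_TYPE_MAP = {
    "float":   "VarType.FP32",
    "float64": "VarType.FP64",
    "float16": "VarType.FP16",
    "uint8":   "VarType.UINT8",
    "int8":    "VarType.INT8",
    "int32":   "VarType.INT32",
    "int64":   "VarType.INT64",
}


def to_paddle_type_sketch(dtype_name):
    """Look up the Paddle dtype for a DALI dtype name; unsupported
    types raise, since not every DALI type has a Paddle equivalent."""
    try:
        return PADDLE_TYPE_MAP[dtype_name]
    except KeyError:
        raise TypeError(f"DALI type {dtype_name!r} has no Paddle equivalent")


print(to_paddle_type_sketch("float"))  # VarType.FP32
```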