Types#
TensorList#
TensorList represents a batch of tensors. TensorLists are the return values of Pipeline.run(), Pipeline.outputs() or Pipeline.share_outputs().
Subsequent invocations of the mentioned functions (or Pipeline.release_outputs()) invalidate the TensorList (as well as any DALI Tensors obtained from it) and indicate to DALI that the memory can be used for something else.
TensorList wraps the outputs of the current iteration and is valid only for the duration of that iteration. Using the TensorList after moving to the next iteration is not allowed. If you wish to retain the data, you need to copy it before indicating to DALI that you released it.
For typical use cases, for example when DALI is used through the DL Framework Plugins, no additional memory bookkeeping is necessary.
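A minimal sketch of keeping output data past the current iteration (it assumes an already built pipeline object named pipe whose first output resides in CPU memory; the names are illustrative):

```python
import numpy as np

outputs = pipe.run()              # tuple of TensorListCPU / TensorListGPU objects
first = outputs[0]                # assumed to be a TensorListCPU here
num_samples = len(first.shape())  # shape() returns one tuple per sample
# Deep-copy each sample into NumPy before the next run() / release_outputs().
copies = [np.array(first[i]) for i in range(num_samples)]
# `copies` remains valid after the next iteration; `first` and its tensors do not.
```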
TensorListCPU#
- class nvidia.dali.tensors.TensorListCPU#
- __getitem__(self: TensorListCPU, i: int) TensorCPU #
Returns a tensor at given position i in the list.
- __init__(*args, **kwargs)#
Overloaded function.
__init__(self: nvidia.dali.tensors.TensorListCPU, object: capsule, layout: str = '') -> None
List of tensors residing in the CPU memory.
- object (DLPack object) – Python DLPack object representing TensorList
- layout (str) – Layout of the data
__init__(self: nvidia.dali.tensors.TensorListCPU, tl: nvidia.dali.tensors.TensorListCPU, layout: object = None) -> None
__init__(self: nvidia.dali.tensors.TensorListCPU, b: Buffer, layout: str = '', is_pinned: bool = False) -> None
List of tensors residing in the CPU memory.
- b (object) – the buffer to wrap into the TensorListCPU object
- layout (str) – Layout of the data
- is_pinned (bool) – If provided memory is page-locked (pinned)
__init__(self: nvidia.dali.tensors.TensorListCPU, list_of_tensors: list, layout: str = '') -> None
List of tensors residing in the CPU memory.
- list_of_tensors ([TensorCPU]) – Python list of TensorCPU objects
- layout (str) – Layout of the data
- as_array(self: TensorListCPU) numpy.ndarray #
Returns TensorList as a numpy array. TensorList must be dense.
- as_reshaped_tensor(self: TensorListCPU, arg0: list[int]) TensorCPU #
Returns a tensor that is a view of this TensorList cast to the given shape.
This function can only be called if the TensorList is contiguous in memory and the volumes of the requested Tensor and the TensorList match.
- as_tensor(self: TensorListCPU) TensorCPU #
Returns a tensor that is a view of this TensorList.
This function can only be called if is_dense_tensor returns True.
- at(self: TensorListCPU, arg0: int) numpy.ndarray #
Returns tensor at given position in the list.
- copy_to_external(self: TensorListCPU, arg0: object) None #
Copy the contents of this TensorList to an external pointer (of type ctypes.c_void_p) residing in CPU memory.
This function is used internally by plugins to interface with tensors from supported Deep Learning frameworks.
- data_ptr(self: TensorListCPU) object #
Returns the address of the first element of TensorList.
- property dtype#
Data type of the TensorListCPU’s elements.
- Type: DALIDataType
- is_dense_tensor(self: TensorListCPU) bool #
Checks whether all tensors in this TensorList have the same shape (and so the list itself can be viewed as a tensor).
For example, if TensorList contains N tensors, each with shape (H,W,C) (with the same values of H, W and C), then the list may be viewed as a tensor of shape (N, H, W, C).
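As a hedged sketch of the dense-tensor view, the snippet below wraps a single contiguous NumPy buffer whose outermost dimension is assumed to be the sample dimension (the shapes are illustrative):

```python
import numpy as np
import nvidia.dali.tensors as dali_tensors

# One contiguous buffer holding 8 samples of shape (4, 6, 3).
batch = np.zeros((8, 4, 6, 3), dtype=np.uint8)
tl = dali_tensors.TensorListCPU(batch, layout="HWC")

assert tl.is_dense_tensor()               # every sample has the same shape
view = tl.as_tensor()                     # TensorCPU viewing the batch as (8, 4, 6, 3)
arr = tl.as_array()                       # the same data as a numpy.ndarray
flat = tl.as_reshaped_tensor([32, 6, 3])  # contiguous view with a matching volume
```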
- layout(self: TensorListCPU) str #
- reset(self: TensorListCPU) None #
- shape(self: TensorListCPU) list[tuple] #
Shape of the tensor list.
TensorListGPU#
- class nvidia.dali.tensors.TensorListGPU#
- __getitem__(self: TensorListGPU, i: int) TensorGPU #
Returns a tensor at given position i in the list.
- __init__(*args, **kwargs)#
Overloaded function.
__init__(self: nvidia.dali.tensors.TensorListGPU, object: capsule, layout: str = '') -> None
List of tensors residing in the GPU memory.
- object (DLPack object) – Python DLPack object representing TensorList
- layout (str) – Layout of the data
__init__(self: nvidia.dali.tensors.TensorListGPU, tl: nvidia.dali.tensors.TensorListGPU, layout: object = None) -> None
__init__(self: nvidia.dali.tensors.TensorListGPU, list_of_tensors: list, layout: str = '') -> None
List of tensors residing in the GPU memory.
- list_of_tensors ([TensorGPU]) – Python list of TensorGPU objects
- layout (str) – Layout of the data
__init__(self: nvidia.dali.tensors.TensorListGPU, object: object, layout: str = '', device_id: int = -1) -> None
List of tensors residing in the GPU memory.
- object (object) – Python object that implements the CUDA Array Interface
- layout (str) – Layout of the data
- device_id (int) – Device where this tensor resides. If not provided, the current device is used.
__init__(self: nvidia.dali.tensors.TensorListGPU) -> None
List of tensors residing in the GPU memory.
- as_cpu(self: TensorListGPU) TensorListCPU #
Returns a TensorListCPU object being a copy of this TensorListGPU.
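A short sketch of bringing GPU outputs to the host (it assumes an already built pipeline pipe whose first output is produced on the gpu device):

```python
gpu_batch = pipe.run()[0]        # TensorListGPU
cpu_batch = gpu_batch.as_cpu()   # TensorListCPU holding a host-side copy
first_sample = cpu_batch.at(0)   # numpy.ndarray with the first sample
print(first_sample.shape)
```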
- as_reshaped_tensor(self: TensorListGPU, arg0: list[int]) TensorGPU #
Returns a tensor that is a view of this TensorList cast to the given shape.
This function can only be called if the TensorList is contiguous in memory and the volumes of the requested Tensor and the TensorList match.
- as_tensor(self: TensorListGPU) TensorGPU #
Returns a tensor that is a view of this TensorList.
This function can only be called if is_dense_tensor returns True.
- at(self: TensorListGPU, arg0: int) TensorGPU #
Returns a tensor at the given position in the list. Deprecated; use __getitem__() instead.
- copy_to_external(self: TensorListGPU, ptr: object, cuda_stream: object = None, non_blocking: bool = False, use_copy_kernel: bool = False) None #
Copy the contents of this TensorList to an external pointer residing in GPU memory.
This function is used internally by plugins to interface with tensors from supported Deep Learning frameworks.
- ptr (ctypes.c_void_p) – Destination of the copy.
- cuda_stream (ctypes.c_void_p) – CUDA stream to schedule the copy on (default stream if not provided).
- non_blocking (bool) – Asynchronous copy.
- data_ptr(self: TensorListGPU) object #
Returns the address of the first element of TensorList.
- device_id(self: TensorListGPU) int #
- property dtype#
Data type of the TensorListGPU’s elements.
- Type: DALIDataType
- is_dense_tensor(self: TensorListGPU) bool #
Checks whether all tensors in this TensorList have the same shape (and so the list itself can be viewed as a tensor).
For example, if TensorList contains N tensors, each with shape (H,W,C) (with the same values of H, W and C), then the list may be viewed as a tensor of shape (N, H, W, C).
- layout(self: TensorListGPU) str #
- reset(self: TensorListGPU) None #
- shape(self: TensorListGPU) list[tuple] #
Shape of the tensor list.
Tensor#
TensorCPU#
- class nvidia.dali.tensors.TensorCPU#
Class representing a Tensor residing in host memory. It can be used to access individual samples of a TensorListCPU or to wrap CPU memory that is intended to be passed as an input to DALI. It is compatible with the Python Buffer Protocol and the NumPy Array Interface.
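A minimal sketch of wrapping host memory (the array contents are illustrative):

```python
import numpy as np
import nvidia.dali.tensors as dali_tensors

# Wrap a NumPy array without copying, then read it back through NumPy.
arr = np.arange(12, dtype=np.float32).reshape(3, 4)
t = dali_tensors.TensorCPU(arr, layout="HW")   # wrapped via the buffer protocol
back = np.array(t)                             # uses the NumPy Array Interface
assert (back == arr).all()
```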
- dtype(self: TensorCPU) str #
String representing NumPy type of the Tensor.
Warning
This method is deprecated. Please use TensorCPU.dtype instead.
- property __array_interface__#
Returns Array Interface representation of TensorCPU.
- __init__(*args, **kwargs)#
Overloaded function.
__init__(self: nvidia.dali.tensors.TensorCPU, object: capsule, layout: str = '') -> None
Wrap a DLPack Tensor residing in the CPU memory.
- object (DLPack object) – Python DLPack object
- layout (str) – Layout of the data
__init__(self: nvidia.dali.tensors.TensorCPU, b: Buffer, layout: str = '', is_pinned: bool = False) -> None
Wrap a Tensor residing in the CPU memory.
- b (object) – the buffer to wrap into the TensorCPU object
- layout (str) – Layout of the data
- is_pinned (bool) – If provided memory is page-locked (pinned)
- copy_to_external(self: TensorCPU, ptr: object) None #
Copy to external pointer in the CPU memory.
- ptr (ctypes.c_void_p) – Destination of the copy.
- property dtype#
Data type of the TensorCPU’s elements.
- Type: DALIDataType
TensorGPU#
- class nvidia.dali.tensors.TensorGPU#
Class representing a Tensor residing in GPU memory. It can be used to access individual samples of a TensorListGPU or to wrap GPU memory that is intended to be passed as an input to DALI. It is compatible with the CUDA Array Interface.
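A hedged sketch of wrapping device memory through the CUDA Array Interface (it assumes CuPy is available; any object exposing __cuda_array_interface__ would do):

```python
import cupy as cp
import nvidia.dali.tensors as dali_tensors

# Zero-copy wrap of a CuPy array residing on the current GPU.
gpu_arr = cp.zeros((480, 640, 3), dtype=cp.uint8)
t = dali_tensors.TensorGPU(gpu_arr, layout="HWC")
print(t.dtype)   # DALIDataType.UINT8
```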
- dtype(self: TensorGPU) str #
String representing NumPy type of the Tensor.
Warning
This method is deprecated. Please use TensorGPU.dtype instead.
- property __cuda_array_interface__#
Returns CUDA Array Interface (Version 2) representation of TensorGPU.
- __init__(*args, **kwargs)#
Overloaded function.
__init__(self: nvidia.dali.tensors.TensorGPU, object: capsule, layout: str = '') -> None
Wrap a DLPack Tensor residing in the GPU memory.
- object (DLPack object) – Python DLPack object
- layout (str) – Layout of the data
__init__(self: nvidia.dali.tensors.TensorGPU, object: object, layout: str = '', device_id: int = -1) -> None
Wrap a Tensor residing in the GPU memory that implements the CUDA Array Interface.
- object (object) – Python object that implements the CUDA Array Interface
- layout (str) – Layout of the data
- device_id (int) – Device where this tensor resides. If not provided, the current device is used.
- copy_to_external(self: TensorGPU, ptr: object, cuda_stream: object = None, non_blocking: bool = False, use_copy_kernel: bool = False) None #
Copy to external pointer in the GPU memory.
- ptr (ctypes.c_void_p) – Destination of the copy.
- cuda_stream (ctypes.c_void_p) – CUDA stream to schedule the copy on (default stream if not provided).
- non_blocking (bool) – Asynchronous copy.
- property dtype#
Data type of the TensorGPU’s elements.
- Type: DALIDataType
- source_info(self: TensorGPU) str #
Gets a string describing the source of the data in the tensor, e.g. the name of the file from which the data was loaded.
- squeeze(self: TensorGPU, dim: object = None) bool #
Removes single-dimensional entries from the shape of the Tensor. Returns True if the shape changed, or False if it remained unchanged.
- dim (int) – If specified, it represents the axis of a single dimension to be squeezed.
- property stream#
Data Layouts#
Tensor Layout String format#
DALI uses short strings (Python str type) to describe the data layout in tensors, by assigning a character to each of the dimensions present in the tensor shape. For example, shape=(400, 300, 3), layout="HWC" means that the data is an image with 3 interleaved channels, 400 pixels of height and 300 pixels of width.
For TensorLists, the index in the list (that is, the index of the sample in the batch) is not treated as a dimension and is not included in the layout.
Interpreting Tensor Layout Strings#
DALI allows you to process data of different nature (e.g. image, video, audio, volumetric images) as well as of different formats (e.g. an RGB image in planar configuration vs. interleaved channels). Typically, DALI operators can deal with different data formats and will behave differently depending on the nature of the input.
While we do not restrict the valid characters to be used in a tensor layout, DALI operators assume a certain naming convention. Here is a list of commonly used dimension names:
Name | Meaning
---|---
H | Height
W | Width
C | Channels
F | Frames
D | Depth
Here are some examples of typically used layouts:
Layout | Description
---|---
HWC | Image (interleaved)
CHW | Image (planar)
DHWC | Volumetric Image (interleaved)
CDHW | Volumetric Image (planar)
FHWC | Video
For instance, a crop operation (Crop operator) receiving an input with interleaved layout (“HWC”) will infer that it should crop on the first and second dimensions (H, W). On the other hand, if the input has a planar layout (“CHW”) the crop will take place on the second and third dimensions instead.
Some operators inherently modify the layout of the data (e.g. Transpose), while others propagate the same data layout to the output (e.g. Normalize).
The layout restrictions (if any) for each operator are available through the operator’s documentation.
It is worth noting that the user is responsible for explicitly filling in the layout information when using the ExternalSource API, as in the sketch below.
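A minimal sketch of filling in the layout for externally provided data (the pipeline, shapes and crop size are illustrative):

```python
import numpy as np
from nvidia.dali import fn, pipeline_def

@pipeline_def(batch_size=4, num_threads=2, device_id=0)
def layout_pipeline():
    images = fn.external_source(
        source=lambda: [np.zeros((400, 300, 3), dtype=np.uint8) for _ in range(4)],
        layout="HWC")                    # interleaved channels
    # Crop infers the spatial dimensions (H, W) from the layout string.
    return fn.crop(images, crop=(200, 150))

pipe = layout_pipeline()
pipe.build()
(out,) = pipe.run()
print(out.layout())   # "HWC"
```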
Constant wrapper#
Constant#
- nvidia.dali.types.Constant(value, dtype=None, shape=None, layout=None, device=None, **kwargs)#
Wraps a constant value which can then be used in the nvidia.dali.Pipeline.define_graph() pipeline definition step.
If the value argument is a scalar and neither shape, layout nor device is provided, the function will return a ScalarConstant wrapper object, which receives special, optimized treatment when used in Mathematical Expressions. Otherwise, the function creates a dali.ops.Constant node, which produces a batch of constant tensors (see the sketch after the parameter list below).
- Parameters:
value (bool, int, float, DALIDataType, DALIImageType, DALIInterpType, a list or tuple thereof, or a numpy.ndarray) – The constant value to wrap. If it is a scalar, it can be used as a scalar value in mathematical expressions. Otherwise, it will produce a constant tensor node (optionally reshaped according to the shape argument). If this argument is a numpy array, a PyTorch tensor or an MXNet array, the values of shape and dtype will default to value.shape and value.dtype, respectively.
dtype (DALIDataType, optional) – Target type of the constant.
shape (list or tuple of int, optional) – Requested shape of the output. If value is a scalar, it is broadcast so as to fill the requested shape. Otherwise, the number of elements in value must match the volume of the shape.
layout (string, optional) – A string describing the layout of the constant tensor, e.g. "HWC".
device (string, optional, "cpu" or "gpu") – The device to place the constant tensor in. If specified, it forces the value to become a constant tensor node on the given device, regardless of the value type or shape.
**kwargs (additional keyword arguments) – If present, the constant becomes a Constant tensor node and the arguments are passed to the dali.ops.Constant operator.
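A short sketch showing both flavours of Constant (the pipeline and values are illustrative):

```python
import numpy as np
from nvidia.dali import fn, pipeline_def, types

@pipeline_def(batch_size=2, num_threads=1, device_id=0)
def constant_pipeline():
    data = fn.external_source(
        source=lambda: [np.full((2, 2), 10.0, dtype=np.float32) for _ in range(2)])
    scale = types.Constant(0.5)                               # ScalarConstant
    offsets = types.Constant(np.float32([1, 2]), layout="W")  # constant tensor node
    return data * scale, offsets
```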
- class nvidia.dali.types.ScalarConstant(value, dtype=None)#
Note
This class should not be instantiated directly; use the Constant() function with appropriate arguments to create instances of this class.
Wrapper for a constant value that can be used in DALI Mathematical Expressions and applied element-wise to the results of DALI Operators representing Tensors in the nvidia.dali.Pipeline.define_graph() step.
ScalarConstant indicates the type the value should be treated as with respect to type promotions. The actual values passed to the backend from Python will be int32 for integer values and float32 for floating point values. Python builtin types bool, int and float will be marked to indicate nvidia.dali.types.DALIDataType.BOOL, nvidia.dali.types.DALIDataType.INT32, and nvidia.dali.types.DALIDataType.FLOAT, respectively.
- Parameters:
value (bool or int or float) – The constant value to be passed to the DALI expression.
dtype (DALIDataType, optional) – Target type of the constant to be used in type promotions.
Enums#
DALIDataType#
- class nvidia.dali.types.DALIDataType#
Object representing the data type of a Tensor.
- NO_TYPE#
- UINT8#
- UINT16#
- UINT32#
- UINT64#
- INT8#
- INT16#
- INT32#
- INT64#
- FLOAT16#
- FLOAT#
- FLOAT64#
- BOOL#
- STRING#
- FEATURE#
- IMAGE_TYPE#
- DATA_TYPE#
- INTERP_TYPE#
- TENSOR_LAYOUT#
- PYTHON_OBJECT#
- nvidia.dali.types.to_numpy_type(dali_type)#
Converts DALIDataType to NumPy type
- Parameters:
dali_type (DALIDataType) – Input type to convert
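A minimal example of the conversion:

```python
import numpy as np
from nvidia.dali import types

assert types.to_numpy_type(types.DALIDataType.FLOAT) == np.float32
assert types.to_numpy_type(types.DALIDataType.UINT8) == np.uint8
```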
DALIInterpType#
DALIImageType#
SampleInfo#
- class nvidia.dali.types.SampleInfo(idx_in_epoch, idx_in_batch, iteration, epoch_idx)#
Describes the indices of a sample requested from nvidia.dali.fn.external_source(); see the sketch after the variable list below.
- Variables:
idx_in_epoch – 0-based index of the sample within epoch
idx_in_batch – 0-based index of the sample within batch
iteration – number of current batch within epoch
epoch_idx – number of current epoch
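A hedged sketch of a per-sample callback receiving SampleInfo (the epoch size of 16 and the sample contents are illustrative):

```python
import numpy as np
from nvidia.dali import fn, pipeline_def, types

def get_sample(info):
    if info.idx_in_epoch >= 16:
        raise StopIteration()   # ends the epoch
    return np.full((2, 2), info.idx_in_epoch, dtype=np.int32)

@pipeline_def(batch_size=4, num_threads=1, device_id=0)
def sample_pipeline():
    # batch=False makes external_source call get_sample once per sample.
    return fn.external_source(source=get_sample, batch=False, dtype=types.INT32)
```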
BatchInfo#
- class nvidia.dali.types.BatchInfo(iteration, epoch_idx)#
Describes the batch requested from nvidia.dali.fn.external_source(); see the sketch after the variable list below.
- Variables:
iteration – number of current batch within epoch
epoch_idx – number of current epoch
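A hedged sketch of a batch-mode callback receiving BatchInfo (4 batches per epoch is illustrative):

```python
import numpy as np
from nvidia.dali import fn, pipeline_def

def get_batch(info):
    if info.iteration >= 4:
        raise StopIteration()   # ends the epoch
    return [np.full((2,), info.epoch_idx, dtype=np.int32) for _ in range(8)]

@pipeline_def(batch_size=8, num_threads=1, device_id=0)
def batch_pipeline():
    # batch_info=True makes external_source pass a BatchInfo instead of an int.
    return fn.external_source(source=get_batch, batch=True, batch_info=True)
```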
TensorLayout#
- class nvidia.dali.types.TensorLayout#