This API is experimental and subject to change without notice!
The functional API is designed to simplify the usage of DALI operators in a pseudo-imperative way.
It exposes operators as functions with the same name as the operator class, but converted
to snake_case - for example,
ops.FileReader is exposed as fn.file_reader:
import nvidia.dali as dali

pipe = dali.pipeline.Pipeline(batch_size = 3, num_threads = 2, device_id = 0)
with pipe:
    files, labels = dali.fn.file_reader(file_root = "./my_file_root")
    images = dali.fn.image_decoder(files, device = "mixed")
    images = dali.fn.rotate(images, angle = dali.fn.uniform(range=(-45,45)))
    images = dali.fn.resize(images, resize_x = 300, resize_y = 300)
    pipe.set_outputs(images, labels)
pipe.build()
outputs = pipe.run()
The use of the functional API does not change other aspects of pipeline definition - the functions
still operate on and return DataNode objects.
Interoperability with operator objects
The functional API is, for the most part, only a wrapper around operator objects - as such, it is inherently compatible with the object-based API. The following example mixes the two, using the object API to pre-configure a file reader and a resize operator:
pipe = dali.pipeline.Pipeline(batch_size = 3, num_threads = 2, device_id = 0)
reader = dali.ops.FileReader(file_root = ".")
resize = dali.ops.Resize(device = "gpu", resize_x = 300, resize_y = 300)
with pipe:
    files, labels = reader()
    images = dali.fn.image_decoder(files, device = "mixed")
    images = dali.fn.rotate(images, angle = dali.fn.uniform(range=(-45,45)))
    images = resize(images)
    pipe.set_outputs(images, labels)
pipe.build()
outputs = pipe.run()
external_source(source=None, num_outputs=None, *, cycle=None, name=None, device='cpu', layout=None, cuda_stream=None, use_copy_kernel=None, batch=True, **kwargs)
Creates a data node which is populated with data from a Python source. The data can be provided by the
source function or iterable, or it can be provided by
pipeline.feed_input(name, data, layout, cuda_stream) inside iter_setup.
In the case of GPU input, it is the user's responsibility to modify the provided GPU memory content only using the provided stream (DALI schedules a copy on it and all work is properly queued). If no stream is provided, feeding the input blocks until the provided memory is copied to the internal buffer.
The nvidia.dali.ops.ExternalSource() operator is not compatible with TensorFlow integration.
To return a batch of copies of the same tensor, use nvidia.dali.types.Constant(), which is more performant.
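For illustration, here is a minimal sketch of feeding a pipeline from a Python callable (the callback name, batch size, and array contents are arbitrary):

import numpy as np
import nvidia.dali as dali

batch_size = 3

# Called once per iteration; returns a whole batch as a list of arrays.
def random_batch():
    return [np.random.rand(100, 100, 3).astype(np.float32) for _ in range(batch_size)]

pipe = dali.pipeline.Pipeline(batch_size = batch_size, num_threads = 2, device_id = 0)
with pipe:
    data = dali.fn.external_source(source = random_batch)
    pipe.set_outputs(data)
pipe.build()
(out,) = pipe.run()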
source (callable or iterable) –
The source of the data.
The source is polled for data (via a call to
next(source)) when the pipeline needs input for the next iteration. Depending on the value of
num_outputs, the source can supply one or more data items. The data item can be a whole batch (default) or a single batch entry (when batch is set to False). If
num_outputs is not set, the
source is expected to return one item (a batch or a sample). If this value is specified (even if its value is 1), the data is expected to be a tuple or list, where each element corresponds to the respective return value of the external_source.
- The data samples must be in one of the compatible array types:
NumPy ndarray (CPU)
MXNet ndarray (CPU)
PyTorch tensor (CPU or GPU)
CuPy array (GPU)
DALI Tensor object
Batch sources must produce entire batches of data. This can be achieved either by adding a new outermost dimension to an array or by returning a list of arrays (in which case they can be of different size, but must have the same rank and element type). A batch source can also produce a DALI TensorList object, which can be an output of another DALI pipeline.
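As a sketch (shapes and dtypes are arbitrary), the same batch can be produced either way:

import numpy as np

# Whole batch as one array - the outermost dimension is the batch size.
def batch_as_array():
    return np.zeros((3, 100, 100, 3), dtype = np.uint8)

# Whole batch as a list of arrays - sizes may differ, but rank and dtype must match.
def batch_as_list():
    return [np.zeros((h, 100, 3), dtype = np.uint8) for h in (80, 100, 120)]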
A per-batch source may accept one positional argument. If it does, it is the index of the current iteration within the epoch, and consecutive calls will be
source(0), source(1), and so on.
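A sketch of such a per-batch callback (a batch size of 3 is assumed):

import numpy as np

def batch_callback(iteration):
    # `iteration` is 0 for the first call in the epoch, 1 for the second, and so on.
    return [np.full((2, 2), iteration * 3 + i, dtype = np.int32) for i in range(3)]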
A per-sample source may accept one positional argument of type
nvidia.dali.types.SampleInfo, which contains the index of the sample in the current epoch and in the batch, as well as the current iteration number.
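A sketch of a per-sample callback (used together with batch=False); the attribute names follow nvidia.dali.types.SampleInfo:

import numpy as np

def sample_callback(sample_info):
    # sample_info.idx_in_epoch, sample_info.idx_in_batch and sample_info.iteration
    # identify the requested sample.
    return np.full((2, 2), sample_info.idx_in_epoch, dtype = np.int32)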
If the source is a generator function, the function is invoked and treated as an iterable. However, unlike a generator, the function can be used with
cycle. In this case, the function will be called again when the generator reaches the end of iteration.
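For example, the following sketch passes a generator function with cycle=True; when the generator is exhausted, the function is called again to obtain a fresh one:

import numpy as np
import nvidia.dali as dali

def gen():
    for i in range(10):
        yield [np.full((2, 2), i, dtype = np.int32) for _ in range(3)]

pipe = dali.pipeline.Pipeline(batch_size = 3, num_threads = 1, device_id = 0)
with pipe:
    data = dali.fn.external_source(source = gen, cycle = True)
    pipe.set_outputs(data)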
For GPU inputs, it is the user's responsibility to modify the provided GPU memory content only in the provided stream. DALI schedules a copy on this stream, and all work is properly queued. If no stream is provided, DALI will use a default, with a best-effort approach at correctness. See the
cuda_stream argument documentation for more information.
num_outputs (int, optional) –
If specified, denotes the number of TensorLists that are produced by the source function.
If set, the operator returns a list of
DataNode objects; otherwise, a single
DataNode object is returned.
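A sketch with two outputs (the callback name and shapes are illustrative):

import numpy as np
import nvidia.dali as dali

def image_and_label_batch():
    images = [np.zeros((100, 100, 3), dtype = np.uint8) for _ in range(3)]
    labels = [np.array([i], dtype = np.int32) for i in range(3)]
    return images, labels   # one element per output

pipe = dali.pipeline.Pipeline(batch_size = 3, num_threads = 1, device_id = 0)
with pipe:
    images, labels = dali.fn.external_source(source = image_and_label_batch, num_outputs = 2)
    pipe.set_outputs(images, labels)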
- Keyword Arguments
cycle (bool, optional) –
If set to True, the source will be wrapped.
If set to False, StopIteration is raised when the end of data is reached. This flag requires that the
source is a collection, for example, an iterable object where
iter(source) returns a fresh iterator on each call, or a generator function. In the latter case, the generator function is called again when more data than was yielded by the function is requested.
name (str, optional) –
The name of the data node.
Used when feeding the data in
iter_setup and can be omitted if the data is provided by source.
layout (layout str or list/tuple thereof, optional) –
If provided, sets the layout of the data.
When num_outputs > 1, the layout can be a list that contains a distinct layout for each output. If the list has fewer than
num_outputs elements, only the first outputs have the layout set; the rest of the outputs don't have a layout set.
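For instance, a hypothetical two-output source could set a layout on the first output only (a sketch; names and shapes are arbitrary):

import numpy as np
import nvidia.dali as dali

def two_outputs():
    return ([np.zeros((100, 100, 3), dtype = np.uint8)] * 3,
            [np.zeros((1,), dtype = np.int32)] * 3)

pipe = dali.pipeline.Pipeline(batch_size = 3, num_threads = 1, device_id = 0)
with pipe:
    images, labels = dali.fn.external_source(
        source = two_outputs, num_outputs = 2,
        layout = ["HWC"])   # first output gets "HWC"; the second is left without a layout
    pipe.set_outputs(images, labels)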
cuda_stream (cudaStream_t or an object convertible to
cudaStream_t, such as cupy.cuda.Stream or torch.cuda.Stream; optional) –
The CUDA stream used to copy data to the GPU or from a GPU source.
If this parameter is not set, a best effort will be made to maintain correctness. That is, if the data is provided as a tensor/array from a recognized library such as CuPy or PyTorch, the library's current stream is used. Although this approach works in typical scenarios, in advanced use cases (and with code that uses unsupported libraries) you might need to explicitly supply the stream handle.
- This argument has two special values:
0 - Use the default CUDA stream
1 - Use DALI’s internal stream
If the internal stream is used, the call to
feed_input will block until the copy to the internal buffer is complete, since there's no way to synchronize with this stream to prevent overwriting the array with new data in another stream.
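A sketch of feeding a GPU array through feed_input on DALI's internal stream (the node name and array contents are illustrative; CuPy is used only as an example of a GPU array source):

import cupy as cp
import nvidia.dali as dali

pipe = dali.pipeline.Pipeline(batch_size = 1, num_threads = 1, device_id = 0)
with pipe:
    data = dali.fn.external_source(name = "gpu_input", device = "gpu")
    pipe.set_outputs(data)
pipe.build()

array = cp.zeros((1, 100, 100, 3), dtype = cp.uint8)   # outermost dimension is the batch
# cuda_stream=1 selects DALI's internal stream; this call blocks until the copy completes.
pipe.feed_input("gpu_input", array, cuda_stream = 1)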
use_copy_kernel (bool, optional) –
If set to True, DALI will use a CUDA kernel to feed the data instead of cudaMemcpyAsync (default).
This is applicable only when copying data to and from GPU memory.
blocking (bool, optional) – Determines whether the external source should wait until data is available or just fail when the data is not available.
no_copy (bool, optional) –
Determines whether DALI should copy the buffer when feed_input is called.
If set to True, DALI passes the user memory directly to the pipeline, instead of copying it. It is the user's responsibility to keep the buffer alive and unmodified until it is consumed by the pipeline.
The buffer can be modified or freed again after the outputs of the relevant iterations have been consumed. Effectively, it happens after
prefetch_queue_depth or cpu_queue_depth * gpu_queue_depth (when they are not equal) iterations following the feed_input call.
The memory location must match the specified
device parameter of the operator. For the CPU, the provided memory can be one contiguous buffer or a list of contiguous Tensors. For the GPU, to avoid an extra copy, the provided buffer must be contiguous. If you provide a list of separate Tensors, there will be an additional copy made internally, consuming both memory and bandwidth.
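A sketch of no-copy feeding (names and shapes are illustrative); the buffer must remain alive and unmodified until the corresponding outputs have been consumed:

import numpy as np
import nvidia.dali as dali

pipe = dali.pipeline.Pipeline(batch_size = 3, num_threads = 1, device_id = 0)
with pipe:
    data = dali.fn.external_source(name = "input", device = "cpu", no_copy = True)
    pipe.set_outputs(data)
pipe.build()

batch = np.zeros((3, 100, 100, 3), dtype = np.uint8)   # one contiguous CPU buffer
pipe.feed_input("input", batch)   # DALI uses `batch` directly, without copying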
batch (bool, optional) – If set to True, the
source is expected to produce an entire batch at once. If set to False, the
source is called per-sample.
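The two modes side by side, as a sketch:

import numpy as np
import nvidia.dali as dali

def whole_batch():
    # batch=True (default): one call produces the entire batch.
    return [np.zeros((2, 2), dtype = np.int32) for _ in range(3)]

def single_sample(sample_info):
    # batch=False: one call produces a single sample.
    return np.zeros((2, 2), dtype = np.int32)

pipe = dali.pipeline.Pipeline(batch_size = 3, num_threads = 1, device_id = 0)
with pipe:
    a = dali.fn.external_source(source = whole_batch)                   # batch defaults to True
    b = dali.fn.external_source(source = single_sample, batch = False)
    pipe.set_outputs(a, b)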