nvidia.dali.experimental.dynamic.readers.Caffe#

class nvidia.dali.experimental.dynamic.readers.Caffe(max_batch_size=None, name=None, device='cpu', num_inputs=None, *, dont_use_mmap=None, image_available=None, initial_fill=None, label_available=None, lazy_init=None, num_shards=None, pad_last_batch=None, path, prefetch_queue_depth=None, random_shuffle=None, read_ahead=None, seed=None, shard_id=None, skip_cached_images=None, stick_to_shard=None, tensor_init_bytes=None)#
__init__(max_batch_size=None, name=None, device='cpu', num_inputs=None, *, dont_use_mmap=None, image_available=None, initial_fill=None, label_available=None, lazy_init=None, num_shards=None, pad_last_batch=None, path, prefetch_queue_depth=None, random_shuffle=None, read_ahead=None, seed=None, shard_id=None, skip_cached_images=None, stick_to_shard=None, tensor_init_bytes=None)#

Reads (image, label) pairs from a Caffe LMDB.

Supported backends
  • ‘cpu’

Keyword Arguments:
  • dont_use_mmap (bool, optional, default = False) –

    If set to True, the Loader will use plain file I/O instead of trying to map the file in memory.

    Memory mapping provides a small performance benefit when accessing a local file system, but most network file systems do not provide optimal performance.

  • image_available (bool, optional, default = True) – Determines whether an image is available in this LMDB.

  • initial_fill (int, optional, default = 1024) –

    Size of the buffer that is used for shuffling.

    If random_shuffle is False, this parameter is ignored.

  • label_available (bool, optional, default = True) – Determines whether a label is available.

  • lazy_init (bool, optional, default = False) – Parse and prepare the dataset metadata only during the first run instead of in the constructor.

  • num_shards (int, optional, default = 1) –

    Partitions the data into the specified number of parts (shards).

    This is typically used for multi-GPU or multi-node training.

  • pad_last_batch (bool, optional, default = False) –

    If set to True, pads the shard by repeating the last sample.

    Note

    If the number of batches differs across shards, this option can cause an entire batch of repeated samples to be added to the dataset.

  • path (str or list of str) – List of paths to the Caffe LMDB directories.

  • prefetch_queue_depth (int, optional, default = 1) –

    Specifies the number of batches to be prefetched by the internal Loader.

    This value should be increased when the pipeline is CPU-stage bound, trading memory consumption for better interleaving with the Loader thread.

  • random_shuffle (bool, optional, default = False) –

    Determines whether to randomly shuffle data.

    A prefetch buffer with a size equal to initial_fill is used to read data sequentially, and then samples are selected randomly to form a batch.

  • read_ahead (bool, optional, default = False) –

    Determines whether the accessed data should be read ahead.

    For large files such as LMDB, RecordIO, or TFRecord, this argument slows down the first access but decreases the time of all subsequent accesses.

  • seed (int, optional, default = -1) – Random seed; if not set, one will be assigned automatically.

  • shard_id (int, optional, default = 0) – Index of the shard to read.

  • skip_cached_images (bool, optional, default = False) –

    If set to True, loading the data is skipped when the sample is already present in the decoder cache.

    In this case, the output of the loader will be empty.

  • stick_to_shard (bool, optional, default = False) –

    Determines whether the reader should stick to its data shard instead of going through the entire dataset.

    If decoder caching is used, this significantly reduces the amount of data to be cached, but it might affect the accuracy of training.

  • tensor_init_bytes (int, optional, default = 1048576) – Hint for how much memory to allocate per image.
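As a sketch only, the arguments above can be combined to build a shuffled, sharded reader. The LMDB path below is a placeholder, and this assumes the class is importable from `nvidia.dali.experimental.dynamic.readers` as documented:

```python
from nvidia.dali.experimental.dynamic import readers

# Placeholder path; point this at a real Caffe LMDB directory.
db_path = "/data/caffe_lmdb"

# Shard 0 of 2, shuffled via a 4096-sample prefetch buffer.
# pad_last_batch=True repeats the last sample so all shards
# yield the same number of batches.
reader = readers.Caffe(
    path=db_path,
    random_shuffle=True,
    initial_fill=4096,
    num_shards=2,
    shard_id=0,
    pad_last_batch=True,
)
```

In a multi-GPU setup, each process would construct its own reader with a distinct `shard_id` and the same `num_shards`.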

next_epoch(batch_size=None, ctx=None)#

Obtains an iterator that goes over the next epoch from the reader.

The return value is an iterator that returns either individual samples (if batch_size is None and was not specified at construction) or batches (if batch_size was specified here or at construction).

This iterator will go over the dataset (or shard, if sharding was specified at construction) once.

Note

The iterator must be traversed completely before the next call to next_epoch is made. Therefore, it is impossible to traverse one reader using two iterators. If another iterator is necessary, create a separate reader instance.
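A sketch of both iteration modes described above, assuming each sample unpacks as an (image, label) pair (the LMDB path is a placeholder):

```python
from nvidia.dali.experimental.dynamic import readers

reader = readers.Caffe(path="/data/caffe_lmdb")

# Per-sample iteration: batch_size was specified neither at
# construction nor here, so the iterator yields individual samples.
for image, label in reader.next_epoch():
    pass  # one (image, label) pair per step

# Batched iteration: each step yields a batch of 32 samples.
# The previous iterator was exhausted, so calling next_epoch()
# again is valid; two live iterators on one reader are not.
for images, labels in reader.next_epoch(batch_size=32):
    pass
```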