Image Pipeline

The ImagePipeline class loads and transforms data for feeding into the network. This pipeline uses TensorFlow as a backend to create and manage the input graph.

As described in Smart Cache, the pipeline can also incorporate caching for faster performance. Smart Cache additionally provides more details on SegmentationImagePipelineWithCache and its arguments, along with a pipeline example using SegmentationImagePipeline.

ImagePipeline is for internal use. Instead, please use one of the task-specific pipeline classes built on ImagePipeline.

KerasImagePipeline is a version of the input pipeline that uses Keras as the backend for managing workers and threads. It is likewise accessed through its task-specific pipeline classes.

KerasImagePipeline is roughly 30% faster than ImagePipeline for classification tasks. It also provides a sampling feature (described below) that allows users to deal with imbalanced data. We recommend using KerasImagePipeline for these tasks.

For segmentation tasks, the performance advantage ranges from roughly 5% to 15%. If your dataset consists of a large number of small files, KerasImagePipeline is likely to give a significant boost in performance; if it consists of a small number of big files, the gains will be minor. Experiment with different parameters to find the optimal performance.

Note

Classification pipelines additionally require label_format to be specified. Details on the label_format configuration can be found here.

data_list_file_path

This is the file path to the dataset JSON file that specifies the characteristics of the data. In many examples, this file is dataset_0.json. Below is an example:


{ "description": "Spleen Segmentation", "labels": { "0": "background", "1": "spleen" }, "licence": "CC-BY-SA 4.0", "modality": { "0": "CT" }, "name": "Spleen", "numTest": 20, "numTraining": 41, "reference": "Memorial Sloan Kettering Cancer Center", "release": "1.0 06/08/2018", "tensorImageSize": "3D", "training": [ { "image": "imagesTr/spleen_29.nii.gz", "label": "labelsTr/spleen_29.nii.gz" }, { "image": "imagesTr/spleen_46.nii.gz", "label": "labelsTr/spleen_46.nii.gz" } ], "validation": [ { "image": "imagesTr/spleen_19.nii.gz", "label": "labelsTr/spleen_19.nii.gz" }, { "image": "imagesTr/spleen_31.nii.gz", "label": "labelsTr/spleen_31.nii.gz" } ] }


Note

The DATASET_JSON value in environment.json is typically used as data_list_file_path.

data_file_base_dir

This is the base directory of the image data. For example, with the above JSON, the directory layout could be as follows:


/workspace/data/Task09_Spleen_nii/
    imagesTr/
        spleen_29.nii.gz
        spleen_46.nii.gz
        spleen_19.nii.gz
        spleen_31.nii.gz
        ...
    labelsTr/
        spleen_29.nii.gz
        spleen_46.nii.gz
        spleen_19.nii.gz
        spleen_31.nii.gz


In this example, the data_file_base_dir should be “/workspace/data/Task09_Spleen_nii”.

Note

The DATA_ROOT value in environment.json is typically used as data_file_base_dir.

data_list_key

This is the key used to get the list of data files from the dataset JSON file. In the examples here, the key could be either “training” or “validation”.
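As a sketch of how these three arguments fit together, an “image_pipeline” section of train_config.json (the key referenced later in this section) could look like the following, using the example dataset above. The class name SegmentationImagePipeline is just one of the available pipelines, and arguments not discussed here (output sizes, worker counts, and so on) are omitted; see the API page for the full list.

"image_pipeline": {
    "name": "SegmentationImagePipeline",
    "args": {
        "data_list_file_path": "/workspace/data/Task09_Spleen_nii/dataset_0.json",
        "data_file_base_dir": "/workspace/data/Task09_Spleen_nii",
        "data_list_key": "training"
    }
}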

Sampling (classification)

Classification datasets are often imbalanced towards certain classes. One way to deal with this is dynamic sampling: instead of picking the next data item completely at random, we can pick items based on a modified probability (or weight). By assigning a higher probability to classes with a smaller number of items, we can reduce the adverse effects of class imbalance.

KerasImagePipeline provides a sampling parameter. Possible options are “automatic” or “element”. Automatic sampling assigns probabilities based on the total number of items per class. Element sampling allows users to provide a weight parameter in the file given by data_list_file_path and uses this weight to pick items. For more details, see KerasImagePipeline.
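For example, automatic sampling could be enabled by adding the sampling argument to the pipeline arguments sketched earlier. This is only an illustration, with the data arguments elided as “...”, so check the KerasImagePipeline documentation for the exact argument names and accepted values.

"args": {
    "data_list_file_path": "...",
    "data_file_base_dir": "...",
    "data_list_key": "training",
    "sampling": "automatic"
}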

Transforms

At the heart of this library are the ready-to-use transforms. These are the transforms you want to apply before the data is passed into the neural network model for training.

The “batch_transforms” key in the “train” section of train_config.json can be used as follows:


"batch_transforms": [ { "name": "MergeBatchDims", "args": { "fields": ["image", "label"] } } ],

FastCropByPosNegRatio, CropByPosNegRatio, and CropByPosNegRatioLabelOnly batch the cropped images, so “image_pipeline” previously had to be configured with batched_by_transforms set to true so that the pipeline did not batch on top of the transform’s batching. With “batch_transforms” and “MergeBatchDims”, it is now possible to batch with the transform as well as with the image pipeline by merging the batches. For example, if FastCropByPosNegRatio has “batch_size” set to 2 and “MergeBatchDims” is used with the ImagePipeline “batch_size” set to 3, the effective batch size will be 6.
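A sketch of that example in the “train” section of train_config.json might look like the following. Only the two “batch_size” values, “MergeBatchDims”, and the pipeline and transform names come from the description above; the placement of FastCropByPosNegRatio under “pre_transforms”, the elided data arguments, and the omitted cropping arguments (fields, crop size, pos/neg ratios) are assumptions made to keep the sketch short, so consult the API page for the real argument lists.

"train": {
    "image_pipeline": {
        "name": "SegmentationImagePipeline",
        "args": {
            "data_list_file_path": "...",
            "data_file_base_dir": "...",
            "data_list_key": "training",
            "batch_size": 3
        }
    },
    "pre_transforms": [
        {
            "name": "FastCropByPosNegRatio",
            "args": {
                "batch_size": 2
            }
        }
    ],
    "batch_transforms": [
        {
            "name": "MergeBatchDims",
            "args": {
                "fields": ["image", "label"]
            }
        }
    ]
}

With these settings, each pipeline batch of 3 contains 2 crops per image, and MergeBatchDims merges the two batch dimensions into an effective batch size of 6.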

This can improve training performance, as the model now sees a larger variety of inputs from various images at once, compared to being forced to have them all come from the same image.

Other parameters

Other configurable parameters for ImagePipeline are more straightforward; details can be found on the API page.
