NVIDIA Clara Train 3.1

ai4med.libs.transforms package

class AcrossChannelSplitter

Bases: ai4med.libs.transforms.multi_format_transformer.MultiFormatTransformer

split(img: ai4med.common.medical_image.MedicalImage)
class ArgmaxAcrossChannelsLabelGenerator(dtype= )

Bases: ai4med.libs.transforms.multi_format_transformer.MultiFormatTransformer

Argmax over channels to produce multi-label segmentation

Parameters

img – the MedicalImage to be processed

generate_label(img: ai4med.common.medical_image.MedicalImage)
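
The underlying operation is a channel-wise argmax. A minimal numpy sketch of the idea (channels-first data assumed; this is not the ai4med implementation):

    import numpy as np

    probs = np.random.rand(4, 8, 8, 8)               # hypothetical 4-channel prediction
    label = np.argmax(probs, axis=0)                 # collapse channels to a single index map
    label = label[np.newaxis, ...].astype(np.uint8)  # keep one label channel
    print(label.shape)                               # (1, 8, 8, 8)
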
class BratsLabelToMultiChannelConverter(class_names: Union[list, tuple], dtype= )

Bases: ai4med.libs.transforms.transformer.Transformer

Convert data to multi channels using brats classes. The possible classes are TC (Tumor core), WT (Whole tumor) and ET (Enhancing tumor). For further details, please see the paper “3D MRI brain tumor segmentation using autoencoder regularization”

Parameters

class_names (list) – List of class names to use for creating channels.

Returns

Converted image with new channels based on brats classes.

convert(img: ai4med.common.medical_image.MedicalImage)

Converts to multi channel image using a list of brats classes.

Parameters

img – Input image to convert.

Returns

Converted image with new channels based on brats classes.
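
For illustration only: assuming the common BraTS label values (1 = necrotic/non-enhancing tumor core, 2 = edema, 4 = enhancing tumor), the three channels can be derived roughly as in the numpy sketch below; the actual converter operates on a MedicalImage and the class_names passed to the constructor.

    import numpy as np

    def brats_to_channels(label):
        # Assumed label values: 1 = necrotic/non-enhancing core, 2 = edema, 4 = enhancing tumor.
        tc = np.logical_or(label == 1, label == 4)   # TC: tumor core
        wt = np.logical_or(tc, label == 2)           # WT: whole tumor
        et = (label == 4)                            # ET: enhancing tumor
        return np.stack([tc, wt, et]).astype(np.float32)

    channels = brats_to_channels(np.random.choice([0, 1, 2, 4], size=(8, 8, 8)))
    print(channels.shape)  # (3, 8, 8, 8)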

class BratsLabelToMultiImageSplitter(class_names, dtype= )

Bases: ai4med.libs.transforms.transformer.Transformer

split(img: ai4med.common.medical_image.MedicalImage)
class ChannelRepeater

Bases: ai4med.libs.transforms.multi_format_transformer.MultiFormatTransformer

repeat(img: ai4med.common.medical_image.MedicalImage, num_times: int)
class ChannelsFirstConverter

Bases: ai4med.libs.transforms.multi_format_transformer.MultiFormatTransformer

convert(img: ai4med.common.medical_image.MedicalImage)
class ChannelsLastConverter

Bases: ai4med.libs.transforms.multi_format_transformer.MultiFormatTransformer

convert(img: ai4med.common.medical_image.MedicalImage)
class ClassIndexToMultiChannelConverter(num_indices, translation_list=None, dtype= )

Bases: ai4med.libs.transforms.transformer.Transformer

Create new channels from data using class indexes.

Parameters
  • num_indices (int) – Number of class indices.

  • translation_list (list) – List of ints used for creating channels.

Returns

MedicalImage with new channels created from class indices.

convert(img: ai4med.common.medical_image.MedicalImage)

Convert the data in image to multi-channel.
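
The conversion is essentially a one-hot expansion over class indices; translation_list, when given, remaps indices first. A numpy sketch of the idea (not the ai4med implementation):

    import numpy as np

    def to_one_hot(label, num_indices):
        # One output channel per class index, channels first.
        return np.stack([label == i for i in range(num_indices)]).astype(np.float32)

    label = np.random.randint(0, 3, size=(8, 8, 8))
    print(to_one_hot(label, num_indices=3).shape)  # (3, 8, 8, 8)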

class ClassIndexToMultiImageSplitter(dtype= )

Bases: ai4med.libs.transforms.transformer.Transformer

split(img: ai4med.common.medical_image.MedicalImage, num_indices)
class MulticlassPredsProcessoor(use_sigmoid_for_binary=True, use_softmax_for_multiclass=True)

Bases: ai4med.libs.transforms.transformer.Transformer

process(predictions, label_format)
softmax(x)
class ClassificationLocationGeneratorCropper(size, batch_size, location_generator, dtype= )

Bases: ai4med.libs.transforms.multi_format_transformer.MultiFormatTransformer

Crops fixed sized sub-volumes from image with the centers being determined by the location generator. Different location generators can be used for different algorithms to choose the centers of volumes to be cropped. This cropper will automatically use SymmetricPadder to increase the image to the minimum size if necessary.

Parameters
  • label_image – MedicalImage of the label data. This will be used for finding foreground/background

  • imgs – list of MedicalImages to be sampled from

  • size – the size of the crop region e.g. [224,224,128]

  • batch_size – number of samples (crop regions) to take

  • location_generator – transform that picks centers based on image and label data

sample(label_img: ai4med.common.medical_image.MedicalImage, imgs, cache_id)
Parameters
  • label_img (MedicalImage) – label MedicalImage

  • imgs (list) – list of MedicalImages to be sampled from

  • cache_id – DataElementKey.ID from the transform context to uniquely identify the data element

Returns

(label_medical_image, img_medical_images) for MedicalImages that have been processed

class ClassificationRandomSampleLocationGenerator3(size, batch_size, ratio_pos_neg=None)

Bases: ai4med.libs.transforms.multi_format_transformer.MultiFormatTransformer

Generate valid sample locations based on image with option for specifying foreground ratio. This is the same as RandomSampleLocationGenerator3, except that this also returns a list indicating whether each center is foreground or not.

Parameters
  • size – size of the ROIs to be sampled

  • batch_size – batch size of data

  • ratio_pos_neg (optional) – ratio of total locations generated that have center being foreground

find_random_foreground_or_background(a, is_foreground=False)
find_random_foreground_or_background_naive(a, is_foreground=False)
generate_centers(img: ai4med.common.medical_image.MedicalImage, label_img: ai4med.common.medical_image.MedicalImage)
Returns

A list of centers picked from the foreground and background determined by img and label_img in a ratio specified by ratio_pos_neg.

class ContrastAdjuster

Bases: ai4med.libs.transforms.multi_format_transformer.MultiFormatTransformer

Changes image intensity by gamma.

adjust(img: ai4med.common.medical_image.MedicalImage, gamma)

Changes image intensity by gamma.

Each pixel/voxel intensity is updated as x = ((x-min)/intensity_range)^gamma*intensity_range+min

Parameters
  • img – image to be updated

  • gamma – gamma value

Returns

updated image
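
A numpy sketch of the documented formula (not the ai4med implementation), with a small guard added for constant images:

    import numpy as np

    def adjust_gamma(x, gamma):
        # x = ((x - min) / intensity_range) ** gamma * intensity_range + min
        lo, hi = x.min(), x.max()
        rng = max(hi - lo, 1e-8)  # guard against a constant image
        return ((x - lo) / rng) ** gamma * rng + lo

    img = np.random.rand(8, 8, 8) * 100.0
    brighter = adjust_gamma(img, gamma=0.5)
    darker = adjust_gamma(img, gamma=2.0)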

class EndPadder

Bases: ai4med.libs.transforms.padder.Padder

Performs padding by appending to the end of the data all on one side for each dimension.

Uses np.pad so in practice, a mode needs to be provided. See numpy.lib.arraypad.pad for additional details.

Parameters
  • img – the MedicalImage to be processed

  • out_size – the size of region of interest at the end of the operation

  • mode – str or function. A portion from numpy.lib.arraypad.pad is copied below.

One of the following string values or a user supplied function:
  • 'constant' – Pads with a constant value. Default is 0.
  • 'edge' – Pads with the edge values of array.
  • 'linear_ramp' – Pads with the linear ramp between end_value and the array edge value.
  • 'maximum' – Pads with the maximum value of all or part of the vector along each axis.
  • 'mean' – Pads with the mean value of all or part of the vector along each axis.
  • 'median' – Pads with the median value of all or part of the vector along each axis.
  • 'minimum' – Pads with the minimum value of all or part of the vector along each axis.
  • 'reflect' – Pads with the reflection of the vector mirrored on the first and last values of the vector along each axis.
  • 'symmetric' – Pads with the reflection of the vector mirrored along the edge of the array.
  • 'wrap' – Pads with the wrap of the vector along the axis. The first values are used to pad the end and the end values are used to pad the beginning.
  • <function> – Padding function, see Notes in the numpy documentation.

determine_data_pad_width(out_size, data_shape)
pad(img: ai4med.common.medical_image.MedicalImage, out_size, mode: str, **kwargs)
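
A hedged numpy sketch of the end-padding idea, assuming a channels-first volume and a hypothetical out_size; all padding is appended after the data in each spatial dimension and the channel dimension is left untouched:

    import numpy as np

    data = np.ones((1, 30, 40, 25), dtype=np.float32)   # hypothetical (channel, D, H, W) volume
    out_size = (32, 48, 32)
    pad_width = [(0, 0)] + [(0, max(t - s, 0)) for s, t in zip(data.shape[1:], out_size)]
    padded = np.pad(data, pad_width, mode='constant', constant_values=0)
    print(padded.shape)  # (1, 32, 48, 32)
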
class ExtremePoints

Bases: ai4med.libs.transforms.multi_format_transformer.MultiFormatTransformer

Calculate Extreme points based on foreground labels.

get_extreme_points(label: ai4med.common.medical_image.MedicalImage, permutation=0, background_index=0)

Get list of extreme points based on the foreground label. Add permutation if needed.

Parameters
  • label (MedicalImage) – Input image containing label data.

  • permutation (float) – Random permutation amount (Default: 0).

  • background_index (int) – Index of the background label.

Returns

list of extreme points.
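
One plausible reading of extreme points is the minimum and maximum foreground coordinate along each spatial axis. A rough numpy sketch under that assumption (the actual transform can additionally jitter the points by the permutation amount):

    import numpy as np

    def extreme_points(label, background_index=0):
        coords = np.argwhere(label != background_index)      # (N, ndim) foreground voxel coordinates
        points = []
        for axis in range(label.ndim):
            points.append(coords[coords[:, axis].argmin()])  # most-negative extreme along this axis
            points.append(coords[coords[:, axis].argmax()])  # most-positive extreme along this axis
        return np.array(points)

    mask = np.zeros((16, 16, 16), dtype=np.uint8)
    mask[4:10, 5:12, 6:9] = 1
    print(extreme_points(mask).shape)  # (6, 3) for a 3D label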

class FastROICropper(size, deform=False, rotation=True, rotation_degree=15, scale=True, scale_factor=0.1, pos=1, neg=1, fast_crop=False)

Bases: ai4med.libs.transforms.multi_format_transformer.MultiFormatTransformer

Fast image data augmentation method (CPU based) by combining volume transform and ROI cropping.

Parameters
  • size – cropped ROI size, e.g., [96, 96, 96].

  • deform – whether to apply 3D deformation.

  • rotation – whether to apply 3D rotation.

  • rotation_degree – the degree of rotation, e.g., 15 means randomly rotate the image/label in a range [-15, +15].

  • scale – whether to apply scaling.

  • scale_factor – the percentage of scaling, e.g., 0.1 means randomly scaling the image/label in a range [-0.1, +0.1].

  • pos – the factor controlling the ratio of positive ROI sampling.

  • neg – the factor controlling the ratio of negative ROI sampling.

Returns

New image and new label

augment_fast_cpu(data, seg, patch_size, patch_center_dist_from_border=30, do_elastic_deform=True, alpha=(0.0, 1000.0), sigma=(10.0, 13.0), do_rotation=True, angle_x=(0, 6.283185307179586), angle_y=(0, 6.283185307179586), angle_z=(0, 6.283185307179586), do_scale=True, scale=(0.75, 1.25), border_mode_data='nearest', border_cval_data=0, order_data=3, border_mode_seg='constant', border_cval_seg=0, order_seg=0, random_crop=True)
augment_fast_cpu_2d(data, seg, patch_size, patch_center_dist_from_border=30, do_elastic_deform=True, alpha=(0.0, 1000.0), sigma=(10.0, 13.0), do_rotation=True, angle_x=(0, 6.283185307179586), angle_y=(0, 6.283185307179586), do_scale=True, scale=(0.75, 1.25), border_mode_data='nearest', border_cval_data=0, order_data=3, border_mode_seg='constant', border_cval_seg=0, order_seg=0, random_crop=True)
crop(img: ai4med.common.medical_image.MedicalImage, label: ai4med.common.medical_image.MedicalImage)
pad_to_minimal_size(image, out_size, start_dim, end_dim, pad_mode='constant')
class ForegroundObjectCropper(size, pad=20, use_only_one_class=False, keep_classes=False, use_gpu=False, pert=0, dtype= )

Bases: ai4med.libs.transforms.multi_format_transformer.MultiFormatTransformer

Crops a sub-volume conforming to the foreground of the label.

Parameters
  • size – resized size of the crop region e.g. [128,128,128]

  • pad – amount of padding around object in millimeters

  • use_only_one_class – if true, one non-zero class is randomly picked to be the foreground while the rest are ignored. Otherwise, all classes are considered foreground

  • keep_classes – if true, keep original label indices in label image (no thresholding). If false, the label indices will be altered and all the classes of the input will be consolidated into one class.

  • use_gpu – if true, use gpu for resizing (currently not yet implemented)

  • pert – maximum magnitude of random perturbation in each dimension added to padding in millimeters

crop(label_img: ai4med.common.medical_image.MedicalImage, imgs)
Parameters
  • label_img – the label MedicalImage to be used to determine foreground and background

  • imgs – list of MedicalImages to be processed

Returns

Cropped label image and imgs

class GeneralCropper(dtype= )

Bases: ai4med.libs.transforms.multi_format_transformer.MultiFormatTransformer

General purpose cropper to produce a sub-volume region of interest (ROI). Either a center and size must be provided, or, if center and size are not provided, the start and end coordinates of the ROI must be provided. The sub-volume must sit within the original image.

Note: This transform will not work if the crop region is larger than the image itself. For small volumes that may run into this issue, it is important to use something like SymmetricPadder before this transform to bring the image up to the minimum size.

Parameters
  • img – the MedicalImage to be processed

  • roi_center (list or tuple) – voxel coordinates for center of the crop ROI

  • roi_size (list or tuple) – size of the crop ROI

  • roi_start (list or tuple) – voxel coordinates for start of the crop ROI

  • roi_end (list or tuple) – voxel coordinates for end of the crop ROI

crop(img: ai4med.common.medical_image.MedicalImage, roi_center=None, roi_size=None, roi_start=None, roi_end=None)

Produces a sub-volume region of interest (ROI). Either a center and size must be provided, or, if roi_center and roi_size are not provided, the roi_start and roi_end coordinates of the ROI must be provided. The sub-volume must sit within the original image.

Parameters
  • img – the MedicalImage to be processed

  • roi_center (list or tuple) – voxel coordinates for center of the crop ROI

  • roi_size (list or tuple) – size of the crop ROI

  • roi_start (list or tuple) – voxel coordinates for start of the crop ROI

  • roi_end (list or tuple) – voxel coordinates for end of the crop ROI

Returns: MedicalImage that has been processed
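
A hedged numpy sketch of the ROI arithmetic, using a hypothetical channels-first volume; here roi_start and roi_end are derived from roi_center and roi_size:

    import numpy as np

    data = np.random.rand(1, 64, 64, 64)                 # hypothetical (channel, D, H, W) volume
    roi_center, roi_size = (32, 32, 32), (24, 24, 16)
    roi_start = [c - s // 2 for c, s in zip(roi_center, roi_size)]
    roi_end = [st + s for st, s in zip(roi_start, roi_size)]
    slices = (slice(None),) + tuple(slice(st, en) for st, en in zip(roi_start, roi_end))
    crop = data[slices]
    print(crop.shape)  # (1, 24, 24, 16)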

class GeneralSampler

Bases: ai4med.libs.transforms.multi_format_transformer.MultiFormatTransformer

General purpose sampler to produce batched sub-volumes with roi_centers and roi_size. The number of samples will be determined by the length of roi_centers. Sub-volumes must sit within original image, and the output is batched. (currently not used, maybe can remove if GeneralCropper covers everything this does)

Parameters
  • img – the MedicalImage to be processed, non-batched

  • roi_size – the size of ROIs (same for all ROIs)

  • roi_centers – the center points of sample ROIs

sample(img: ai4med.common.medical_image.MedicalImage, roi_centers, roi_size)
class ImageMaker(dtype= )

Bases: ai4med.libs.transforms.multi_format_transformer.MultiFormatTransformer

Makes new image using extreme points.

make_from_points(img: ai4med.common.medical_image.MedicalImage, points, sigma=0)

Make new image using extreme points.

Parameters
  • img (MedicalImage) – Original image for reference.

  • points (ndarray) – List of extreme points.

  • sigma (float) – Parameter for adding gaussian. Default 0

class IntensitiesNormalizer(dtype= )

Bases: ai4med.libs.transforms.multi_format_transformer.MultiFormatTransformer

Normalize input based on provided args, using the calculated mean and std if not provided. The shapes of subtrahend and divisor must match: if the shape is 0 (a scalar), the entire volume uses the same subtrahend and divisor; otherwise the shape can have dimension 1 for channels.

Parameters
  • img – the MedicalImage to be processed

  • subtrahend (ndarray) – the amount to subtract by (usually the mean)

  • divisor (ndarray) – the amount to divide by (usually the standard deviation)

normalize(img: ai4med.common.medical_image.MedicalImage, subtrahend=None, divisor=None)

Normalizes the data

Parameters
  • img – the MedicalImage to be processed

  • subtrahend – the amount to subtract by (usually the mean)

  • divisor – the amount to divide by (usually the standard deviation)

Returns

MedicalImage that has been processed
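
A numpy sketch of the normalization itself (not the ai4med implementation), falling back to the calculated mean and std when subtrahend and divisor are not given:

    import numpy as np

    def normalize(data, subtrahend=None, divisor=None):
        subtrahend = data.mean() if subtrahend is None else subtrahend
        divisor = data.std() if divisor is None else divisor
        return (data - subtrahend) / divisor

    img = np.random.rand(1, 32, 32, 32) * 500.0
    out = normalize(img)
    print(round(float(out.mean()), 3), round(float(out.std()), 3))  # ~0.0, ~1.0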

class IntensityRangeScaler

Bases: ai4med.libs.transforms.multi_format_transformer.MultiFormatTransformer

Apply intensity scaling to the whole numpy array, mapping from [a_min, a_max] to [b_min, b_max], with an optional clip.

Parameters
  • a_min (int or float) – intensity original range min

  • a_max (int or float) – intensity original range max

  • b_min (int or float) – intensity target range min

  • b_max (int or float) – intensity target range max

  • do_clipping (bool) – whether to perform clip after scaling

scale(img: ai4med.common.medical_image.MedicalImage, a_min, a_max, b_min, b_max, do_clipping=False, to_dtype= )
Parameters
  • img (MedicalImage) – input image to be processed

  • a_min (int or float) – intensity original range min

  • a_max (int or float) – intensity original range max

  • b_min (int or float) – intensity target range min

  • b_max (int or float) – intensity target range max

  • do_clipping (bool) – whether to perform clip after scaling

  • to_dtype (dtype attribute) – data type identifier

Returns

MedicalImage that has been processed
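
A numpy sketch of the mapping (not the ai4med implementation):

    import numpy as np

    def scale_range(x, a_min, a_max, b_min, b_max, do_clipping=False):
        y = (x - a_min) / (a_max - a_min) * (b_max - b_min) + b_min
        return np.clip(y, b_min, b_max) if do_clipping else y

    ct = np.random.uniform(-2000.0, 3000.0, size=(32, 32, 32))       # hypothetical CT intensities
    scaled = scale_range(ct, a_min=-1000.0, a_max=1000.0, b_min=0.0, b_max=1.0, do_clipping=True)
    print(scaled.min(), scaled.max())                                # clipped to [0.0, 1.0]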

class IntensityScaler

Bases: ai4med.libs.transforms.multi_format_transformer.MultiFormatTransformer

Multiplies the input intensities by a provided scaling factor.

scale(img: ai4med.common.medical_image.MedicalImage, scale_factor, to_dtype= )

Multiplies the data of img by scale_factor.

Parameters
  • scale_factor – a multiplicative factor

  • to_dtype – output data type

class IntensityScalerShifter

Bases: ai4med.libs.transforms.multi_format_transformer.MultiFormatTransformer

Perturbs intensity

scale_shift(img: ai4med.common.medical_image.MedicalImage, scale, shift)

Intensity scale and shift operation based on the formula c = c * (1 + scale) + shift * std(c), where scale and shift define the ranges of the internal random variables; a numpy sketch follows the parameter list below.

Parameters
  • scale – increase or decrease the range of intensity

  • shift – move up or down the intensity
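
A numpy sketch of the formula above, with hypothetical magnitudes for the random scale and shift draws (not the ai4med implementation):

    import numpy as np

    rng = np.random.default_rng(0)
    c = np.random.rand(32, 32, 32) * 100.0
    scale_magnitude, shift_magnitude = 0.1, 0.1        # hypothetical ranges for the random draws
    s = rng.uniform(-scale_magnitude, scale_magnitude)
    t = rng.uniform(-shift_magnitude, shift_magnitude)
    out = c * (1 + s) + t * c.std()                    # c = c * (1 + scale) + shift * std(c)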

class IntensityShifter

Bases: ai4med.libs.transforms.multi_format_transformer.MultiFormatTransformer

Uniformly shifts intensity values for entire image by offset value.

Parameters

offset – value to shift intensity by

shift(img: ai4med.common.medical_image.MedicalImage, offset, to_dtype= )
Parameters
  • img (MedicalImage) – input image to be processed

  • offset (int or float) – value to shift intensity by

  • to_dtype (dtype attribute) – dtype of output

Returns: MedicalImage that has been processed

class LabelReplacer(dtype= )

Bases: ai4med.libs.transforms.multi_format_transformer.MultiFormatTransformer

Replaces given set of labels in data with new labels.

pick(img: ai4med.common.medical_image.MedicalImage, input_labels, output_labels)

Replaces all input_labels in data to output_labels.

Parameters
  • img (MedicalImage) – image containing data.

  • input_labels (list) – Input indices for mapping.

  • output_labels (list) – Output indices to map to.

Returns

Image with new set of labels after mapping.
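
A numpy sketch of the label mapping (not the ai4med implementation), merging several hypothetical label values into one:

    import numpy as np

    def replace_labels(label, input_labels, output_labels):
        out = label.copy()
        for src, dst in zip(input_labels, output_labels):
            out[label == src] = dst                    # map each input index to its output index
        return out

    seg = np.random.choice([0, 1, 2, 4], size=(8, 8, 8))
    merged = replace_labels(seg, input_labels=[1, 2, 4], output_labels=[1, 1, 1])
    print(np.unique(merged))  # [0 1]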

class LocationGeneratorCropper(size, batch_size, location_generator, batches_to_gen_at_once=1, dtype= )

Bases: ai4med.libs.transforms.multi_format_transformer.MultiFormatTransformer

Crops fixed sized sub-volumes from image with the centers being determined by the location generator. Different location generators can be used for different algorithms to choose the centers of volumes to be cropped. This cropper will automatically use SymmetricPadder to increase the image to the minimum size if necessary, padding with default mode constant and with the value 0.

Parameters
  • label_image – MedicalImage of the label data. This will be used for finding foreground/background

  • imgs – list of MedicalImages to be sampled from

  • size – the size of the crop region e.g. [224,224,128]

  • batch_size – number of samples (crop regions) to take

  • location_generator – transform that picks centers based on image and label data

  • batches_to_gen_at_once – the number of batches of ROI centers that will be generated at once by the location generator. Rather than process the foreground/background and generating a batch of centers each time, if the same image will end up being used to generate centers multiple times, this variable can be set to determine the number of batches of centers to generate at once and then cache for better efficiency.

sample(label_img: ai4med.common.medical_image.MedicalImage, imgs, cache_id)
Parameters
  • label_img (MedicalImage) – label MedicalImage

  • imgs (list) – list of MedicalImages to be sampled from

  • cache_id – DataElementKey.ID from the transform context to uniquely identify the data element

Returns

(label_medical_image, img_medical_images) for MedicalImages that have been processed
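
A construction sketch built only from the signatures documented on this page; the import path, the parameter values, and the way label_img, imgs and cache_id are obtained from the transform context are assumptions:

    # Import path is an assumption; this page documents the ai4med.libs.transforms
    # package but does not show import statements.
    from ai4med.libs.transforms import LocationGeneratorCropper, RandomSampleLocationGenerator

    location_gen = RandomSampleLocationGenerator(
        size=[96, 96, 96],           # ROI size
        batch_size=4,                # centers per batch
        ratio_pos_neg=0.5,           # fraction of centers placed on foreground (see above)
        batches_to_gen_at_once=4,    # cache several batches of centers per image
    )
    cropper = LocationGeneratorCropper(
        size=[96, 96, 96],
        batch_size=4,
        location_generator=location_gen,
        batches_to_gen_at_once=4,
    )
    # label_img (MedicalImage), imgs (list of MedicalImages) and cache_id come from
    # the transform context and are not constructed here:
    # cropped_label, cropped_imgs = cropper.sample(label_img, imgs, cache_id)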

class MultiFormatTransformer

Bases: object

Base class for multi-format transformer.

Following TensorFlow's NN format, 12 numpy data formats are specified based on image dimension, batch mode, and channel mode.

transform(med_image, *args, **kwargs)
class NoiseAdder

Bases: ai4med.libs.transforms.multi_format_transformer.MultiFormatTransformer

Adds noise to the entire image.

add(img: ai4med.common.medical_image.MedicalImage, noise)

Adds noise to the entire image.

Parameters
  • img – the source medical image

  • noise – numpy array with the same shape as img’s data

Returns

Instance of MedicalImage

class NonzeroIntensitiesNormalizer(dtype= )

Bases: ai4med.libs.transforms.multi_format_transformer.MultiFormatTransformer

Normalize input to zero mean and unit std, based on non-zero elements only for each input channel individually.

Parameters

img – the MedicalImage to be processed

normalize(img: ai4med.common.medical_image.MedicalImage)
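
A numpy sketch of the per-channel, non-zero normalization (not the ai4med implementation), assuming channels-first data:

    import numpy as np

    def normalize_nonzero(data):
        out = data.astype(np.float32)
        for c in range(out.shape[0]):                  # per input channel
            mask = out[c] != 0
            if mask.any():
                out[c][mask] = (out[c][mask] - out[c][mask].mean()) / out[c][mask].std()
        return out

    img = np.random.rand(2, 16, 16, 16) * (np.random.rand(2, 16, 16, 16) > 0.5)
    out = normalize_nonzero(img)
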
class Padder

Bases: ai4med.libs.transforms.multi_format_transformer.MultiFormatTransformer

determine_data_pad_width(out_size, data_shape)
class PropertyCopier(must_copy=True)

Bases: object

copy(from_img: ai4med.common.medical_image.MedicalImage, to_img: ai4med.common.medical_image.MedicalImage, property_names: list)


class RandomBiasField

Bases: ai4med.libs.transforms.multi_format_transformer.MultiFormatTransformer

Pseudo multiplicative bias field for MRI: generates a multiplicative bias field and applies it.

apply(img: ai4med.common.medical_image.MedicalImage, degree=3, coeff_range=(0.0, 0.1))

Apply random bias field.

Parameters
  • img – instance of MedicalImage

  • degree – degree of freedom of the polynomials

  • coeff_range – range of the random coefficients

Returns

MedicalImage

class RandomSampleLocationGenerator(size, batch_size, ratio_pos_neg=None, batches_to_gen_at_once=1)

Bases: ai4med.libs.transforms.multi_format_transformer.MultiFormatTransformer

Generate valid sample locations based on image with option for specifying foreground ratio. Valid means the samples sit entirely within the image.

Compared to the other RandomSampleLocationGenerators, this one only takes the label into account and ignores whether the voxels in the image are 0 when determining the background. That makes this transform faster but can increase the chance that areas with less foreground are selected.

Parameters
  • img – the MedicalImage containing foreground/background (typically label data)

  • size – size of the ROIs to be sampled

  • sample_number – total sample centers to be generated, equal to the batch_size * batches_to_gen_at_once

  • ratio_pos_neg (optional) – ratio of total locations generated that have center being foreground (label value > 0)

generate_centers(img: ai4med.common.medical_image.MedicalImage, label_img: ai4med.common.medical_image.MedicalImage)
Parameters
  • img – img is not used but still here to match the signatures of the other RandomSampleLocationGenerators

  • label_img (MedicalImage) – label MedicalImage to use to determine foreground and background

Returns

list of centers generated with specified Pos Neg Ratio.

class RandomSampleLocationGenerator2(size, batch_size, ratio_pos_neg=None, batches_to_gen_at_once=1)

Bases: ai4med.libs.transforms.multi_format_transformer.MultiFormatTransformer

Generate valid sample locations based on image with option for specifying foreground ratio. This differs from RandomSampleLocationGenerator because this one takes the image into account as well as the label to find foreground and background: (image_entry not zero) and (label_entry == 0)

Parameters
  • img – MedicalImage containing volume

  • label_img – MedicalImage containing label with foreground/background

  • size – size of the ROIs to be sampled

  • sample_number – total sample centers to be generated, equal to the batch_size * batches_to_gen_at_once

  • ratio_pos_neg (optional) – ratio of total locations generated that have center being foreground

generate_centers(img: ai4med.common.medical_image.MedicalImage, label_img: ai4med.common.medical_image.MedicalImage)
Returns

A list of centers picked from the foreground and background determined by img and label_img in a ratio specified by ratio_pos_neg.

class RandomSampleLocationGenerator3(size, batch_size, ratio_pos_neg=None, batches_to_gen_at_once=1)

Bases: ai4med.libs.transforms.multi_format_transformer.MultiFormatTransformer

Generate valid sample locations based on image with option for specifying foreground ratio. This is like RandomSampleLocationGenerator2 which takes into account the image as well as the label to find foreground and background: (image_entry not zero) and (label_entry == 0)

The difference between this and RandomSampleLocationGenerator2 is that this uses the fast_crop algorithm, so the way in which the centers are picked is different. This transform uses True/False masks the size of the spatial shape corresponding to foreground and background, then randomly picks a point from the entire volume before checking if the value matches foreground or background, whatever is being picked that iteration. On the other hand, RandomSampleLocationGenerator2 uses True/False masks to get all valid background and foreground points to pick from, and then picks an arbitrary index to find such a point.

Parameters
  • img – MedicalImage containing volume

  • label_img – MedicalImage containing label with foreground/background

  • size – size of the ROIs to be sampled

  • sample_number – total sample centers to be generated, equal to the batch_size * batches_to_gen_at_once

  • ratio_pos_neg (optional) – ratio of total locations generated that have center being foreground

find_random_foreground_or_background(a, is_foreground=False)
find_random_foreground_or_background_naive(a, is_foreground=False)
generate_centers(img: ai4med.common.medical_image.MedicalImage, label_img: ai4med.common.medical_image.MedicalImage)
Returns

A list of centers picked from the foreground and background determined by img and label_img in a ratio specified by ratio_pos_neg.

class RandomSizeWithDisplacementCropper(lower_size, max_displacement=50, keep_aspect=True, dtype= )

Bases: ai4med.libs.transforms.multi_format_transformer.MultiFormatTransformer

Crop randomly sized sub-volume from image between lower_size and the image size. The size of the resulting image after this transformation will vary.

Parameters
  • lower_size – lower limit of crop size, can be a list or tuple for the spatial dimensions of the image. lower_size can also be a single value in which case a list will be created automatically and lower_size for each dimension will be that value. int values in lower_size are interpreted as the size in voxels. Alternatively, lower_size can be type float, in which case it must be < 1, to represent the ratio of the cropped lower_size to the original image size.

  • max_displacement – max displacement from center of the input image to the center of the crop region. This can be one integer greater than or equal to 0, or a list or tuple of size equal to the spatial dimensions of the input images with an integer greater than or equal to 0 for each respective dimension.

  • keep_aspect – if true, the original aspect ratio is kept; in that case, the first dimension of lower_size will be used and the rest will be ignored.

crop(imgs)
Parameters

imgs – List of MedicalImages to crop

Returns

List of cropped MedicalImages

class Sharpener(dtype= )

Bases: ai4med.libs.transforms.multi_format_transformer.MultiFormatTransformer

Sharpen input with gaussian filter for each input channel individually.

Parameters

img – the MedicalImage to be processed

sharpen(img: ai4med.common.medical_image.MedicalImage)
class Smoother(dtype= )

Bases: ai4med.libs.transforms.multi_format_transformer.MultiFormatTransformer

Smooth input with gaussian filter for each input channel individually.

Parameters

img – the MedicalImage to be processed

smooth(img: ai4med.common.medical_image.MedicalImage)
class SpatialFlipper(dtype= )

Bases: ai4med.libs.transforms.multi_format_transformer.MultiFormatTransformer

Flips in spatial domain along given axis (ignore channel and batch).

flip(img: ai4med.common.medical_image.MedicalImage, flip_axis)

Flips img around given flip_axis.

Parameters
  • img (MedicalImage) – image containing data to be flipped.

  • flip_axis (list) – Axis to use for flipping.

Returns

flipped image.

class SpatialRotator2D(flags=1, border_mode=2, dtype= )

Bases: ai4med.libs.transforms.multi_format_transformer.MultiFormatTransformer

Rotate 2D images.

Parameters
  • flags – Interpolation mode. Default: cv2.INTER_LINEAR.

  • border_mode – Border mode. Default : cv2.BORDER_REFLECT.

  • dtype – np array type to convert to after rotation happens.

rotate(img: ai4med.common.medical_image.MedicalImage, angle)

Rotate the spatial data of 2D Image by given angle.

Parameters
  • img (MedicalImage) – image containing data to be rotated.

  • angle (float) – Angle of rotation in degrees in counter clockwise direction.

Returns

Image rotated by given angle.

class SpatialRotator3D(dtype= )

Bases: ai4med.libs.transforms.multi_format_transformer.MultiFormatTransformer

Rotates a given 3D array by a multiple of 90 degrees.

rotate(img: ai4med.common.medical_image.MedicalImage, num_rotations=1, axis=None)

Rotate the spatial data of image by 90 degrees x num_rotations.

Parameters
  • img (MedicalImage) – image containing data to be rotated.

  • num_rotations (int) – Num of times to rotate. Default 1

  • axis (list) – Axis for rotation. If None, default axis is presumed based on shape.

Returns

Image rotated by 90 degrees x num_rotations.

class SpatialScaler(dtype= )

Bases: ai4med.libs.transforms.multi_format_transformer.MultiFormatTransformer

Scales spatial domain (ignore channel and batch)

scale_by_factor(img: ai4med.common.medical_image.MedicalImage, factor, is_label)
scale_by_resolution(img: ai4med.common.medical_image.MedicalImage, target_resolution, is_label)
scale_by_spacing(img: ai4med.common.medical_image.MedicalImage, target_spacing, is_label)
scale_to_original_shape(img: ai4med.common.medical_image.MedicalImage, is_label)
scale_to_shape(img: ai4med.common.medical_image.MedicalImage, target_shape, is_label)
class SpatialSmoothScaler(se, dtype= )

Bases: ai4med.libs.transforms.multi_format_transformer.MultiFormatTransformer

Scales spatial domain with ndimage.binary_closing to smooth.

smooth_scale_to_shape(img: ai4med.common.medical_image.MedicalImage, target_shape, nearest)
class SpatialZoomer(use_gpu=False, keep_size=False, dtype= )

Bases: ai4med.libs.transforms.multi_format_transformer.MultiFormatTransformer

Zooms a 3d image. Batch and channel dimensions are not zoomed.

Parameters
  • use_gpu (bool) – Whether to use the GPU instead of the CPU.

  • keep_size (bool) – Whether to keep the original size (padding if needed).

zoom(img: ai4med.common.medical_image.MedicalImage, interpolation, zoom=1)

Zooms a 3d image. Batch and channel dimensions are not zoomed.

Parameters
  • img (MedicalImage) – image containing data.

  • interpolation – Interpolation type (defined in ImageOps.INTERPOLATION_*)

  • zoom (list) – Amount to zoom in each spatial dimension. (<1 means zoom out. >1 means zoom in. Default 1)

Returns

Image with spatial data zoomed according to given zoom factor.

class SymmetricDivPadder

Bases: ai4med.libs.transforms.padder.Padder

Performs padding on both sides of data for each dimension. The padded image size is divisible by a given integer. This component calculates the nearest pad size, out_size, computes the difference between the out_size and the size of the data, and divides it by 2 for the amount to pad on each side. If the number of cells that need to be padded is odd, there will be one more added to the end of the data compared to the beginning. Uses np.pad so in practice, a mode needs to be provided. See numpy.lib.arraypad.pad for additional details.

Parameters
  • img – the MedicalImage to be processed

  • div_int – the integer the pad size is divisible by

  • mode – str or function. A portion from numpy.lib.arraypad.pad is copied below.

One of the following string values or a user supplied function:
  • 'constant' – Pads with a constant value. Default is 0.
  • 'edge' – Pads with the edge values of array.
  • 'linear_ramp' – Pads with the linear ramp between end_value and the array edge value.
  • 'maximum' – Pads with the maximum value of all or part of the vector along each axis.
  • 'mean' – Pads with the mean value of all or part of the vector along each axis.
  • 'median' – Pads with the median value of all or part of the vector along each axis.
  • 'minimum' – Pads with the minimum value of all or part of the vector along each axis.
  • 'reflect' – Pads with the reflection of the vector mirrored on the first and last values of the vector along each axis.
  • 'symmetric' – Pads with the reflection of the vector mirrored along the edge of the array.
  • 'wrap' – Pads with the wrap of the vector along the axis. The first values are used to pad the end and the end values are used to pad the beginning.
  • <function> – Padding function, see Notes in the numpy documentation.

determine_data_pad_width(out_size, shape)
pad(img: ai4med.common.medical_image.MedicalImage, div_int: int, mode: str, **kwargs)
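
A hedged numpy sketch of the pad-width arithmetic: round each spatial size up to the nearest multiple of div_int, split the difference between both sides, and give the extra voxel (if any) to the end:

    import numpy as np

    data = np.ones((1, 30, 41, 25), dtype=np.float32)   # hypothetical (channel, D, H, W) volume
    div_int = 16
    pad_width = [(0, 0)]                                 # channel dimension is left untouched
    for s in data.shape[1:]:
        target = int(np.ceil(s / div_int)) * div_int     # nearest size divisible by div_int
        diff = target - s
        pad_width.append((diff // 2, diff - diff // 2))  # odd remainders go to the end
    padded = np.pad(data, pad_width, mode='constant', constant_values=0)
    print(padded.shape)  # (1, 32, 48, 32)
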
class SymmetricPadder

Bases: ai4med.libs.transforms.padder.Padder

Performs padding on both sides of data for each dimension. This component calculates the difference between the out_size and the size of the data and divides it by 2 for the amount to pad on each side. If the number of cells that need to be padded is odd, there will be one more added to the end of the data compared to the beginning. Uses np.pad so in practice, a mode needs to be provided. See numpy.lib.arraypad.pad for additional details.

Parameters
  • img – the MedicalImage to be processed

  • out_size – the size of region of interest at the end of the operation

  • mode – str or function. A portion from numpy.lib.arraypad.pad is copied below.

One of the following string values or a user supplied function:
  • 'constant' – Pads with a constant value. Default is 0.
  • 'edge' – Pads with the edge values of array.
  • 'linear_ramp' – Pads with the linear ramp between end_value and the array edge value.
  • 'maximum' – Pads with the maximum value of all or part of the vector along each axis.
  • 'mean' – Pads with the mean value of all or part of the vector along each axis.
  • 'median' – Pads with the median value of all or part of the vector along each axis.
  • 'minimum' – Pads with the minimum value of all or part of the vector along each axis.
  • 'reflect' – Pads with the reflection of the vector mirrored on the first and last values of the vector along each axis.
  • 'symmetric' – Pads with the reflection of the vector mirrored along the edge of the array.
  • 'wrap' – Pads with the wrap of the vector along the axis. The first values are used to pad the end and the end values are used to pad the beginning.
  • <function> – Padding function, see Notes in the numpy documentation.

determine_data_pad_width(out_size, shape)
pad(img: ai4med.common.medical_image.MedicalImage, out_size, mode: str, **kwargs)
class Transformer

Bases: object
