DriveWorks SDK Reference
4.0.0 Release
For Test and Development only

Data Conditioner Workflow

The following code snippets show how the Data Conditioner module is typically used. Note that error handling is omitted for clarity.


Initialize the DataConditioner parameters with default values.

Before the Data Conditioner module can be initialized, the Data Conditioner parameters must first be initialized.

The dwDataConditionerParams structure permits setting the following parameters:

  • float32_t meanValue[DW_MAX_IMAGE_PLANES]: mean value to be subtracted from each input image pixel. Default is the 0-vector. This shall be used if the network has been trained with mean-centered data.
  • dwImageCUDA *meanImage: mean image to be subtracted from each input image. meanImage is expected to be float16 or float32. The pixel format is required to be R or RGBA with interleaved channels. The dimensions of the mean image must meet the dimensions of the network input. Default is the null pointer. This is an alternative to meanValue, if a specific mean image is to be subtracted. Note: if both meanValue and meanImage are provided, both values are subtracted.
  • dwBool splitPlanes: flag to indicate whether the image is in interleaved (false) or planar (true) format. Default is false.
  • float32_t scaleCoefficient: Scale pixel intensities. Default is 1.0. It shall be used if the network has been trained with images whose pixel values have been scaled to a specified range, e.g., [0, 1]. If scaleCoefficient is 1.0, the output pixel intensities always lie in [0, 255], regardless of the input pixel intensity range.
  • dwBool ignoreAspectRatio: Indicates whether the aspect ratio must be ignored during the scaling operation. Default is false.
  • dwBool doPerPlaneMeanNormalization: Indicates whether per-plane mean normalization must be performed. If true, the mean value is computed for each plane of the image and subtracted from the pixel intensities of the corresponding plane.
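The combined pixel-level effect of meanValue, meanImage, and scaleCoefficient can be sketched as follows. This is a minimal illustration of the math implied by the parameters above, not the DriveWorks implementation; conditionPixel and planeMean are hypothetical helpers.

```c
#include <stddef.h>

/* Hypothetical sketch of the per-pixel transformation implied by the
 * parameters above: subtract meanValue and, if a mean image is given,
 * the corresponding meanImage pixel, then scale. */
static float conditionPixel(float in, float meanValue, float meanImagePixel,
                            float scaleCoefficient)
{
    return (in - meanValue - meanImagePixel) * scaleCoefficient;
}

/* With doPerPlaneMeanNormalization, the subtracted mean is instead
 * computed from the plane itself. */
static float planeMean(const float *plane, size_t n)
{
    float sum = 0.0f;
    for (size_t i = 0; i < n; ++i)
        sum += plane[i];
    return sum / (float)n;
}
```

For example, with meanValue = 127.5 and scaleCoefficient = 1.0, an input pixel of 127.5 maps to 0.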

Modify parameters as required by the network or by the application.

// Initialize parameters with default values.
dwDataConditionerParams params{};
dwDataConditioner_initParams(&params);
// Ignore the aspect ratio when the input image is scaled.
params.ignoreAspectRatio = true;
// Set mean value to {127.5, 127.5, 127.5}.
params.meanValue[0] = 127.5f;
params.meanValue[1] = 127.5f;
params.meanValue[2] = 127.5f;

Once the Data Conditioner parameters have been defined, the Data Conditioner object can be initialized. There are two ways to accomplish this:

dwStatus dwDataConditioner_initialize(dwDataConditionerHandle_t *obj,
                                      const dwBlobSize *networkInputBlobSize,
                                      const dwDataConditionerParams *dataConditionerParams,
                                      cudaStream_t stream, dwContextHandle_t ctx);

In this case, the user is required to provide the network input blob dimensions, which can be acquired from the dwDNN module via dwDNN_getInputSize() once the network is loaded. The batch size in networkInputBlobSize can be modified to allow a batch of images to be prepared in parallel.

dwStatus dwDataConditioner_initializeFromTensorProperties(dwDataConditionerHandle_t *obj,
                                                          const dwDNNTensorProperties *outputProperties,
                                                          uint32_t maxNumImages,
                                                          const dwDataConditionerParams *dataConditionerParams,
                                                          cudaStream_t stream, dwContextHandle_t ctx);

This initialization function requires dwDNNTensorProperties, which can also be acquired from the dwDNN module via dwDNN_getInputTensorProperties(). maxNumImages determines how many images can be prepared in parallel.

ctx is assumed to be a previously initialized dwContextHandle_t.

Data Preparation

Allocate CUDA memory to store the output of DataConditioner module.

// Compute the number of elements to store in the output.
size_t dataConditionerOutputTotalSize = dnnInputBlobSize.batch * dnnInputBlobSize.channels * dnnInputBlobSize.height * dnnInputBlobSize.width;
float32_t *dataConditionerOutput;
cudaMalloc(&dataConditionerOutput, sizeof(float32_t) * dataConditionerOutputTotalSize);

Perform operations on a given image and store the result in dataConditionerOutput. All operations are performed asynchronously with respect to the host code. Data preparation can be executed in two ways: with raw pointers or with dwDNNTensorHandle_t.

With raw pointers:

// Set a region of interest in the given image. In this case, the region of interest is the whole image.
dwRect regionOfInterest{};
regionOfInterest.width = inputImage.prop.width;
regionOfInterest.height = inputImage.prop.height;
// Prepare image using DataConditioner
dwDataConditioner_prepareDataRaw(dataConditionerOutput, &inputImage, 1, &regionOfInterest, cudaAddressModeClamp, dataConditioner);

With dwDNNTensorHandle_t:

// Set a region of interest in the given image. In this case, the region of interest is the whole image.
dwRect regionOfInterest{};
regionOfInterest.width = inputImage.prop.width;
regionOfInterest.height = inputImage.prop.height;
dwDNNTensorHandle_t tensorOutput;
// Allocate tensor output ...
// Prepare image using DataConditioner
dwDataConditioner_prepareData(tensorOutput, &inputImage, 1, &regionOfInterest, cudaAddressModeClamp, dataConditioner);

inputImages contains pointers to the images of the batch that shall be prepared. The number of images in inputImages is given through numImages and shall not exceed the (possibly modified) batch size of the network.

The regionOfInterest parameter defines the region, in all images, to which the desired transformations as well as the network inference shall be applied. The region of interest is identified by the coordinates of its top-left corner and by the width and height of the rectangle. If the full images are of interest, the top-left corner must be set to (0, 0), and the width and height according to the images at hand. The internal resizing of the ROI to match the network input size is defined in such a way that no image content is lost, but undefined border values might be created. addressMode sets the fill-up strategy for these undefined border values. Two modes are allowed: cudaAddressModeBorder and cudaAddressModeClamp.
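The "no content lost, possibly undefined borders" resizing behavior can be illustrated with a short sketch (fitRoiToNetworkInput is a hypothetical helper, not part of the DriveWorks API): the smaller of the two per-axis scales is applied to both axes, so the whole ROI fits inside the network input, and the axis that does not fill it leaves border pixels to be filled according to addressMode.

```c
typedef struct {
    float scale;          /* uniform scale applied to the ROI */
    int scaledW, scaledH; /* ROI size after scaling */
} FitResult;

/* Fit an ROI into the network input without discarding content: pick the
 * smaller of the two per-axis scales. The axis that does not reach the
 * network input size leaves undefined border pixels. */
static FitResult fitRoiToNetworkInput(int roiW, int roiH, int netW, int netH)
{
    float sx = (float)netW / (float)roiW;
    float sy = (float)netH / (float)roiH;
    FitResult r;
    r.scale = sx < sy ? sx : sy;
    r.scaledW = (int)((float)roiW * r.scale);
    r.scaledH = (int)((float)roiH * r.scale);
    return r;
}
```

For example, a 1280x720 ROI fitted into a 640x480 network input is scaled by 0.5 to 640x360, leaving 120 rows of undefined border values at the bottom.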

In a nutshell, dwDataConditioner_prepareData() or dwDataConditioner_prepareDataRaw() crops and resizes the defined ROI from each input image in order to match the network input size, applies the desired transformations, and returns the transformed image batch in dataConditionerOutput, which can be used as input to dwDNN_infer() (see DNN module Inference).

Run Inference and Transform Output to Input Image Space

// Prepared data is usually given to dwDNN module for inference.

Assuming that the network returns coordinates relative to the network output, the interpreted output of the network must be transformed back from the network input coordinate frame to the input image space. This can be done with dwDataConditioner_outputPositionToInput(), where the point to be transformed back is passed through inputX and inputY and is returned in outputX and outputY. The same regionOfInterest that was passed to dwDataConditioner_prepareData() shall be used.
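Conceptually, this back-transformation undoes the uniform ROI-to-network-input scaling and re-applies the ROI offset. A minimal sketch of that math (outputToInput is a hypothetical helper, not the DriveWorks API):

```c
/* Map a point from the network-input coordinate frame back to the input
 * image frame, assuming the ROI anchored at (roiX, roiY) was scaled
 * uniformly by 'scale' during data preparation. */
static void outputToInput(float netX, float netY, float scale,
                          int roiX, int roiY, float *imgX, float *imgY)
{
    *imgX = netX / scale + (float)roiX;
    *imgY = netY / scale + (float)roiY;
}
```

For example, with a scale of 0.5 and an ROI anchored at (100, 50), the network-input point (320, 180) maps back to (740, 410) in the original image.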

Free the Memory

Finally, free previously allocated memory.

// Free resources
cudaFree(dataConditionerOutput);
dwDataConditioner_release(dataConditioner);

For more information see: