VPI - Vision Programming Interface

1.2 Release

Temporal Noise Reduction

Overview

This algorithm is used to reduce both spatial and temporal noise in video sequences. Noise levels can vary significantly depending on lighting conditions, camera sensor sensitivity and quality, among other factors. VPI's temporal noise reduction is best suited to handle thermal and shot noise, which follow Gaussian and Poisson distributions respectively.

There are currently 3 versions of the algorithm, with varying degrees of speed, configurability and quality. Not all versions are available in every backend and platform combination.

The noise reduction factor can be customized by means of two parameters: the scene lighting condition and the strength (a value between 0 and 1). Setting the lighting condition to a low-light scene increases the reduction strength, but might result in loss of detail in highly textured regions or even some ghosting. Using a bright-light mode does the opposite: less ghosting and more detail preserved, but a lower noise reduction factor.

The example below shows the temporal noise reduction in action: a noisy input video and the corresponding de-noised output, produced with the following parameters:

  • scene: outdoor medium light
  • strength: 1.0
  • version: 3

Implementation

Depending on the chosen version, the algorithm employs different techniques, such as bilateral filtering to handle spatial noise and/or a temporal IIR filter coupled with a motion detector.
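
As a rough illustration of the temporal part, the sketch below shows a motion-adaptive IIR filter in C. It is not VPI's actual implementation; the motion metric, the blending weights and the 8-bit luma-only processing are simplifying assumptions made for this example.

    #include <math.h>
    #include <stdint.h>

    /* Conceptual sketch of a motion-adaptive temporal IIR filter (not VPI's actual code).
     * Pixels that look static are blended strongly with the previous de-noised output,
     * while pixels with apparent motion keep mostly their current value to avoid ghosting. */
    void tnrSketch(const uint8_t *curr, const uint8_t *prevOut, uint8_t *out,
                   int width, int height, float strength)
    {
        for (int i = 0; i < width * height; ++i)
        {
            /* Crude per-pixel motion detector: difference between the current frame
               and the previous de-noised output. */
            float diff = fabsf((float)curr[i] - (float)prevOut[i]);

            /* Blending weight: large for static pixels, small where motion is detected.
               The 32.0f threshold is arbitrary, chosen only for this illustration. */
            float alpha = strength * fmaxf(0.0f, 1.0f - diff / 32.0f);

            out[i] = (uint8_t)((1.0f - alpha) * curr[i] + alpha * prevOut[i] + 0.5f);
        }
    }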

Algorithm Version

There are 3 versions of the algorithm, each one with different speed/configurability/quality trade-offs:

  • VPI_TNR_V1 - Offers basic noise reduction that works well when noise levels aren't too high. Lighting conditions and noise reduction strength are fixed and can't be configured, but in general it provides good speed.
  • VPI_TNR_V2 - Offers improved noise reduction, and can be configured for a particular lighting condition. Still provides good speed.
  • VPI_TNR_V3 - Offers the best noise reduction quality and configurability, in exchange for some performance decrease.

The availability of each version depends on the backend and device; see Limitations and Constraints for more information. To pick the best available version, just use VPI_TNR_DEFAULT when specifying the algorithm version.
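
For example, in the C API the version is fixed when the payload is created. A minimal sketch, assuming the width, height and NV12_ER format used elsewhere on this page, that lets VPI pick the best version on the CUDA backend:

    VPIPayload tnr = NULL;

    /* VPI_TNR_DEFAULT selects the best TNR version supported by the CUDA backend
       on the current device; an explicit VPI_TNR_V1/V2/V3 can be requested instead. */
    if (vpiCreateTemporalNoiseReduction(VPI_BACKEND_CUDA, width, height,
                                        VPI_IMAGE_FORMAT_NV12_ER, VPI_TNR_DEFAULT,
                                        &tnr) != VPI_SUCCESS)
    {
        /* The requested version/backend combination isn't available on this device. */
    }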

Scene Lighting Condition

Versions 2 and 3 allow the user to specify the scene's lighting condition, which in turn controls the noise reduction strength for that scene. Higher strength leads to less noise, but sometimes also to loss of detail in highly textured regions and some ghosting, especially around fast-moving objects. If the input noise is high, due to poor lighting and/or higher sensor sensitivity, this might be a suitable trade-off.

Scene presets are available for indoor and outdoor scenes under low, medium and high lighting conditions, such as the VPI_TNR_PRESET_OUTDOOR_MEDIUM_LIGHT preset used in the examples below.

For each preset, the user can define a strength factor: a floating-point number between 0 (least amount of noise reduction) and 1 (maximum amount). 0.5 is a good default for most cases.
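
In the C API, both the preset and the strength are passed at submission time through the VPITNRParams structure. A minimal sketch using the enumerator shown later on this page:

    VPITNRParams params;

    /* Start from the library defaults, then choose the scene preset and a moderate
       strength; 0.5 is a reasonable starting point for most content. */
    vpiInitTemporalNoiseReductionParams(&params);
    params.preset   = VPI_TNR_PRESET_OUTDOOR_MEDIUM_LIGHT;
    params.strength = 0.5f;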

Usage

Language: Python
  1. Import VPI module
    import vpi
  2. Initialization phase
    1. Create the Temporal Noise Reduction object, configuring it to handle NV12_ER inputs. The CUDA backend will be used to execute the algorithm.
      with vpi.Backend.CUDA:
          tnr = vpi.TemporalNoiseReduction(size, vpi.Format.NV12_ER, preset=vpi.TNRPreset.INDOOR_MEDIUM_LIGHT, strength=1)
  3. Processing phase
    1. Fetch a new frame from the input video sequence. Here it's assumed that it's already a VPI image with NV12_ER format.
      while inVideo.read(input)[0]:
    2. Feed it into the TNR object. A denoised version of the input is returned.
      denoised = tnr(input)
Language: C/C++
  1. Initialization phase
    1. Include the header that defines the temporal noise reduction algorithm
      #include <vpi/algo/TemporalNoiseReduction.h>
    2. Create the image that will store each frame fetched from the video source. Frames are to be fetched directly as NV12_ER. If instead they come in a format not supported by the algorithm, Convert Image Format can be used to perform the required conversions.
      VPIImage input;
      vpiImageCreate(width, height, VPI_IMAGE_FORMAT_NV12_ER, 0, &input);
    3. Create the output and previous output image buffers with same dimensions and type as input.
      VPIImage prevOutput, output;
      vpiImageCreate(width, height, VPI_IMAGE_FORMAT_NV12_ER, 0, &prevOutput);
      vpiImageCreate(width, height, VPI_IMAGE_FORMAT_NV12_ER, 0, &output);
    4. Create the algorithm payload, using the best algorithm version available (VPI_TNR_DEFAULT) on the CUDA backend. The scene lighting condition and strength are configured later, when the algorithm is submitted.
      VPIPayload tnr;
      vpiCreateTemporalNoiseReduction(VPI_BACKEND_CUDA, width, height, VPI_IMAGE_FORMAT_NV12_ER, VPI_TNR_DEFAULT, &tnr);
    5. Create the stream where the algorithm will be submitted for execution.
      VPIStream stream;
      vpiStreamCreate(0, &stream);
  2. Processing phase
    1. Fetch a new frame from the input video sequence and write it into input.
      while (FetchFrame(vid, &input))
      {
    2. Submit it to the stream for processing. If it's the first frame, the prevFrame parameter must be NULL; otherwise the output of the previous iteration must be passed.
      VPITNRParams params;
      vpiInitTemporalNoiseReductionParams(&params);
      params.preset   = VPI_TNR_PRESET_OUTDOOR_MEDIUM_LIGHT;
      params.strength = 1.0f;

      if (isFirstFrame)
      {
          vpiSubmitTemporalNoiseReduction(stream, VPI_BACKEND_CUDA, tnr, NULL, input, output, &params);
      }
      else
      {
          vpiSubmitTemporalNoiseReduction(stream, VPI_BACKEND_CUDA, tnr, prevOutput, input, output, &params);
      }
    3. (optional) Sync to the stream and use/display the de-noised frame.
      vpiStreamSync(stream);
    4. Swap output and prevOutput. In the next iteration, the current output will be passed as prevOutput, and the current prevOutput buffer will receive the next de-noised frame.
      VPIImage tmp = output;
      output = prevOutput;
      prevOutput = tmp;
      }
  3. Cleanup phase
    1. Free resources held by the stream, the payload, and the input and output images.
      vpiStreamDestroy(stream);
      vpiPayloadDestroy(tnr);
      vpiImageDestroy(input);
      vpiImageDestroy(prevOutput);
      vpiImageDestroy(output);

Consult the Temporal Noise Reduction sample application for a complete example.

For more information, see Temporal Noise Reduction in the "API Reference" section of VPI - Vision Programming Interface.

Limitations and Constraints

Constraints for specific backends supersede the ones specified for all backends.

All Backends

  • Input and output images must have the same dimensions and format.

CUDA

VIC

CPU and PVA

  • Not implemented

Performance

For information on how to use the performance table below, see Algorithm Performance Tables.
Before comparing measurements, consult Comparing Algorithm Elapsed Times.
For further information on how performance was benchmarked, see Performance Benchmark.
