L4T Multimedia API Reference

31.1 Release

Backend

Overview

This application implements a typical appliance that performs intelligent video analytics. Application areas include public safety, smart cities, and autonomous machines. The example demonstrates four concurrent video streams going through a decoding process using the on-chip decoders, video scaling using the on-chip scaler, and GPU compute. For simplicity of demonstration, only one of the channels uses NVIDIA® TensorRT to perform object identification and draw bounding boxes around the identified objects. The sample also uses video converter functions for various format conversions, and uses EGLImage to demonstrate buffer sharing and image display.

In this sample, object detection is limited to identifying cars in video streams of 960 x 540 resolution, running at up to 14 FPS. The network is based on GoogleNet. Inference is performed frame by frame; no object tracking is involved. This network is intended only as an example that shows how to use TensorRT to quickly build a compute pipeline. The sample includes a trained GoogleNet model, trained with the NVIDIA Deep Learning GPU Training System (DIGITS) on roughly 3000 frames captured from an elevation of 5-10 feet. Detection accuracy varies with the video samples fed in. Because this sample is locked to half-HD resolution at under 10 FPS, video feeds with a higher frame rate will show stuttering during playback when used for inference.

This sample does not require a camera.

Building and Running

Prerequisites

  • You have followed Steps 1-3 in Building and Running.
  • You have installed:
    • CUDA Toolkit
    • OpenCV
  • Optionally, you have installed NVIDIA® TensorRT (previously known as GPU Inference Engine (GIE))

To build

  1. If you want to run the sample without TensorRT, set the following in the Makefile:
      ENABLETRT := 0
    
    By default, TensorRT is enabled.
  2. Enter:
      $ cd backend
      $ make
    

To run

  • Enter:
     $ ./backend 1 ../../data/Video/sample_outdoor_car_1080p_10fps.h264 H264 \
        --trt-deployfile ../../data/Model/GoogleNet_one_class/GoogleNet_modified_oneClass_halfHD.prototxt \
        --trt-modelfile ../../data/Model/GoogleNet_one_class/GoogleNet_modified_oneClass_halfHD.caffemodel \
        --trt-mode 0 --trt-proc-interval 1 -fps 10
    

To quit

  • Enter q.

To view command-line options

  • Enter:
     $ cd backend
     $ ./backend -h
    


Flow

The following image shows the movement of data through the sample when TensorRT is not enabled.

The following image shows data flow details for the channel using TensorRT.

NvEGLImageFromFd is an NVIDIA API that returns an EGLImage from a dmabuf file descriptor allocated through the Tegra buffer mechanism. The TensorRT channel then uses the EGLImage buffer to render the bounding box onto the image.
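The mapping step can be sketched as follows. This is a minimal sketch, not the sample's exact code: `egl_display`, `dmabuf_fd`, and `process_frame` are illustrative names, and error handling is abbreviated.

```cpp
// Sketch: map a decoded frame's dmabuf fd into CUDA via EGLImage.
// Assumes a valid EGLDisplay (egl_display) and a dmabuf fd (dmabuf_fd)
// dequeued from the converter capture plane.
#include <EGL/egl.h>
#include <EGL/eglext.h>
#include <cuda.h>
#include <cudaEGL.h>
#include "nvbuf_utils.h"

void process_frame(EGLDisplay egl_display, int dmabuf_fd)
{
    EGLImageKHR egl_image = NvEGLImageFromFd(egl_display, dmabuf_fd);
    if (egl_image == NULL)
        return; // EGLImage creation failed

    // Register the EGLImage with CUDA and retrieve the mapped frame.
    CUgraphicsResource resource;
    CUeglFrame egl_frame;
    cuGraphicsEGLRegisterImage(&resource, egl_image,
                               CU_GRAPHICS_MAP_RESOURCE_FLAGS_NONE);
    cuGraphicsResourceGetMappedEglFrame(&egl_frame, resource, 0, 0);

    // egl_frame now describes the image planes; a CUDA kernel can read
    // the pixels and draw the bounding box here.

    cuGraphicsUnregisterResource(resource);
    NvDestroyEGLImage(egl_display, egl_image);
}
```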

X11 Details

For X11 technical details, see:

http://www.x.org/docs/X11/xlib.pdf


Key Structure and Classes

The context_t structure (backend/v4l2_backend_test.h) manages all the resources of the sample application.

Element           Description
NvVideoDecoder    Contains all video decoding-related elements and functions.
NvVideoConverter  Contains elements and functions for video format conversion.
NvEglRenderer     Contains all EGL display rendering-related functions.
EGLImageKHR       The EGLImage used for CUDA processing. This type comes from the open source EGL graphics library.

NvVideoDecoder

The NvVideoDecoder class creates a new V4L2 Video Decoder. The following table describes the key NvVideoDecoder members that this sample uses.

Member                                 Description
NvV4l2Element::output_plane            Holds the V4L2 output plane.
NvV4l2Element::capture_plane           Holds the V4L2 capture plane.
NvVideoDecoder::createVideoDecoder     Static function that creates a video decoder object.
NvV4l2Element::subscribeEvent          Subscribes to an event.
NvVideoDecoder::setExtControls         Sets an external control on the V4L2 device.
NvVideoDecoder::setOutputPlaneFormat   Sets the output plane format.
NvVideoDecoder::setCapturePlaneFormat  Sets the capture plane format.
NvV4l2Element::getControl              Gets the value of a control setting.
NvV4l2Element::dqEvent                 Dequeues an event reported by the V4L2 device.
NvV4l2Element::isInError               Checks whether the element is in an error state.
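A decoder setup using the members above can be sketched roughly as follows. Buffer counts and the element name are illustrative; see the sample source for the complete sequence, including capture-plane configuration after the resolution-change event.

```cpp
// Sketch: create and configure an NvVideoDecoder for an H.264 stream.
#include "NvVideoDecoder.h"

NvVideoDecoder *dec = NvVideoDecoder::createVideoDecoder("dec0");

// Subscribe to the resolution-change event before feeding any data,
// so the capture plane can be set up once the stream size is known.
dec->subscribeEvent(V4L2_EVENT_RESOLUTION_CHANGE, 0, 0);

// The output plane carries the compressed H.264 bitstream.
dec->setOutputPlaneFormat(V4L2_PIX_FMT_H264, 2 * 1024 * 1024);
dec->output_plane.setupPlane(V4L2_MEMORY_MMAP, 10, true, false);
dec->output_plane.setStreamStatus(true);

// ... queue bitstream buffers; when dqEvent() reports
// V4L2_EVENT_RESOLUTION_CHANGE, the capture plane is configured with
// setCapturePlaneFormat() and decoded frames can be dequeued.
```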

NvVideoConverter

The NvVideoConverter class packages all video-conversion-related elements and functions. It performs color space conversion, scaling, and conversion between hardware buffer memory and software buffer memory. The following table describes the key NvVideoConverter members that this sample uses.

Member                                   Description
NvV4l2Element::output_plane              Holds the output plane.
NvV4l2Element::capture_plane             Holds the capture plane.
NvVideoConverter::waitForIdle            Waits until all the buffers queued on the output plane are converted and dequeued from the capture plane. This is a blocking method.
NvVideoConverter::setCapturePlaneFormat  Sets the format on the converter capture plane.
NvVideoConverter::setOutputPlaneFormat   Sets the format on the converter output plane.
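A converter configured to scale decoded frames down to the inference resolution can be sketched as follows. The dimensions, buffer layouts, and buffer counts are illustrative assumptions, not the sample's exact values.

```cpp
// Sketch: configure the converter to scale 1080p decoded frames
// down to 960x540 for inference.
#include "NvVideoConverter.h"

NvVideoConverter *conv = NvVideoConverter::createVideoConverter("conv0");

// The output plane receives the full-resolution decoded frame ...
conv->setOutputPlaneFormat(V4L2_PIX_FMT_NV12M, 1920, 1080,
                           V4L2_NV_BUFFER_LAYOUT_BLOCKLINEAR);
// ... and the capture plane produces the scaled frame.
conv->setCapturePlaneFormat(V4L2_PIX_FMT_NV12M, 960, 540,
                            V4L2_NV_BUFFER_LAYOUT_PITCH);

conv->output_plane.setupPlane(V4L2_MEMORY_DMABUF, 10, false, false);
conv->capture_plane.setupPlane(V4L2_MEMORY_MMAP, 10, true, false);
conv->output_plane.setStreamStatus(true);
conv->capture_plane.setStreamStatus(true);

// Block (up to 2000 ms) until every queued buffer has been converted.
conv->waitForIdle(2000);
```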

NvVideoDecoder and NvVideoConverter contain two key elements: output_plane and capture_plane. These objects are instantiated from the NvV4l2ElementPlane class type.

NvV4l2ElementPlane

The NvV4l2ElementPlane class represents one plane of an NvV4l2Element. The following table describes the key NvV4l2ElementPlane members used in this sample. Note that v4l2_buf is a local variable inside the NvV4l2ElementPlane::dqThreadCallback function, so its scope is limited to the callback. If other functions of the sample must access this buffer, it must be copied inside the callback first.

Member                                   Description
NvV4l2ElementPlane::setupPlane           Sets up the plane of the V4L2 element.
NvV4l2ElementPlane::deinitPlane          Destroys the plane of the V4L2 element.
NvV4l2ElementPlane::setStreamStatus      Starts or stops the stream.
NvV4l2ElementPlane::setDQThreadCallback  Sets the callback function of the dequeue buffer thread.
NvV4l2ElementPlane::startDQThread        Starts the dequeue buffer thread.
NvV4l2ElementPlane::stopDQThread         Stops the dequeue buffer thread.
NvV4l2ElementPlane::qBuffer              Queues a V4L2 buffer on the plane.
NvV4l2ElementPlane::dqBuffer             Dequeues a V4L2 buffer from the plane.
NvV4l2ElementPlane::getNumBuffers        Gets the number of buffers on the plane.
NvV4l2ElementPlane::getNumQueuedBuffers  Gets the number of buffers currently queued on the plane.
NvV4l2ElementPlane::getNthBuffer         Gets the NvBuffer object at index N.
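The v4l2_buf scoping rule above can be illustrated with a dequeue-thread callback sketch. The context_t member `frame_ts` and the callback name are hypothetical; the callback signature follows the NvV4l2ElementPlane::dqThreadCallback convention.

```cpp
// Sketch of a capture-plane dequeue-thread callback. Because v4l2_buf is
// valid only inside the callback, any fields needed later (here, the
// timestamp) must be copied before the callback returns.
static bool
conv_capture_dqbuf_callback(struct v4l2_buffer *v4l2_buf,
                            NvBuffer *buffer, NvBuffer *shared_buffer,
                            void *arg)
{
    context_t *ctx = (context_t *) arg;

    if (!v4l2_buf)
        return false;                    // returning false stops the DQ thread

    // Copy what we need out of v4l2_buf; the pointer is invalid after return.
    ctx->frame_ts = v4l2_buf->timestamp; // frame_ts: hypothetical member

    // Process buffer->planes[0].data here, then recycle the buffer.
    ctx->conv->capture_plane.qBuffer(*v4l2_buf, NULL);
    return true;
}

// Registration: set the callback, then start the dequeue thread.
// ctx->conv->capture_plane.setDQThreadCallback(conv_capture_dqbuf_callback);
// ctx->conv->capture_plane.startDQThread(ctx);
```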

TRT_Context

TRT_Context provides a series of interfaces to load a Caffe model and perform inference. The following table describes the key TRT_Context members used in this sample.

Member                           Description
TRT_Context::destroyTrtContext   Destroys the TRT_Context.
TRT_Context::getNumTrtInstances  Gets the number of TRT_Context instances.
TRT_Context::doInference         Interface for inference after the TensorRT model is loaded.

Functions to Create/Destroy EGLImage

The sample uses two global functions to create and destroy an EGLImage from a dmabuf file descriptor. These functions are defined in nvbuf_utils.h.

Global Function      Description
NvEGLImageFromFd()   Creates an EGLImage from a dmabuf fd.
NvDestroyEGLImage()  Destroys the EGLImage.