L4T Multimedia API Reference

32.3.1 Release

Building and Running

You can run the samples on Jetson without rebuilding them. However, if you modify those samples, you must rebuild them before running them.
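If you modify a sample, rebuilding it in place on the device is usually enough (a minimal sketch, assuming the samples are installed under ll_samples/samples and that each sample directory carries its own Makefile with the usual clean target):

     $ cd ll_samples/samples/<modified sample>
     $ make clean && make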

For information on building the samples on a host Linux PC (x86), see Setting Up Cross-Platform Support.

Build and run the samples by following the procedures in this document:

  1. Export environment variables.
  2. Use JetPack to install these programs:
    • NVIDIA® CUDA®
    • OpenCV
    • cuDNN
    • NVIDIA® TensorRT, previously known as GIE
  3. Create symbolic links.
  4. Optionally, set up cross-compiler support.
  5. Build and run the samples.

Step 1: Export environment variables

  • Export the DISPLAY environment variable for the X display with the following command:
      $ export DISPLAY=:0
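
If you are logged in over SSH, DISPLAY=:0 targets the X server driving the Jetson's attached screen. As a quick sanity check (a sketch, assuming an X server is running and the xdpyinfo utility from x11-utils is installed), confirm that the display is reachable:

     $ xdpyinfo -display :0 | head -n 3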
    

Step 2: Use JetPack to install CUDA, OpenCV, cuDNN, and TensorRT

If you have already installed these libraries, you can skip this step.

  1. Download JetPack from the following website:
     https://developer.nvidia.com/embedded/downloads
    
  2. Run the installation script from the host machine with the following commands:
     $ chmod +x ./JetPack-L4T-<version>-linux-x64.run
     $ ./JetPack-L4T-<version>-linux-x64.run
    
  3. Select Development Environment.
  4. Select "Custom" and click "Clear Action".
  5. Select "CUDA Toolkit", "OpenCV", "cuDNN Package", and "TensorRT", and then install them.
  6. For installation details, see the _installer folder.
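
Once installation finishes, you can sanity-check the components on the target (a sketch: the /usr/local/cuda path and the package-name patterns are assumptions based on the default JetPack layout):

     $ /usr/local/cuda/bin/nvcc --version
     $ dpkg -l | grep -iE "cudnn|tensorrt|opencv"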

Step 3: Create symbolic links

  • Create symbolic links with the following commands:
     $ cd /usr/lib/aarch64-linux-gnu
     $ sudo ln -sf libv4l2.so.0 libv4l2.so
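
You can confirm that the link resolves as expected (a quick check; ls -l prints the symlink target):

     $ ls -l /usr/lib/aarch64-linux-gnu/libv4l2.so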
    

Step 4: Set up cross-compiler support (Optional)

  • For the full procedure, see Setting Up Cross-Platform Support; a rough sketch follows.
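
Cross-building from an x86 host conventionally means pointing the build at an aarch64 toolchain and a copy of the target root filesystem. The variable names below are assumptions for illustration, not confirmed against this release's Makefiles:

     $ export CROSS_COMPILE=aarch64-linux-gnu-
     $ export TARGET_ROOTFS=<path to the Jetson root filesystem>
     $ make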

Step 5: Build and run the samples

  • Build and run each sample as described on its page in this reference; a typical invocation is sketched after the following table.
    Directory Location (relative to ll_samples/samples): Description
    00_video_decode: Decodes H.264, H.265, VP8, VP9, MPEG4, and MPEG2 video from a local file and then shares the YUV buffer with the EGL renderer.
    01_video_encode: Encodes a YUV bitstream from a local file and then writes an elementary H.264/H.265 stream into a file.
    02_video_dec_cuda: Decodes H.264/H.265 video from a local file and then shares the YUV buffer with CUDA to draw a black box in the left corner.
    03_video_cuda_enc: Uses CUDA to draw a black box in the YUV buffer and then feeds it to the video encoder to generate an H.264/H.265 video file.
    04_video_dec_trt: Uses simple TensorRT calls to save the bounding box info to a file.
    05_jpeg_encode: Uses libjpeg-8b APIs to encode JPEG images from software-allocated buffers.
    06_jpeg_decode: Uses libjpeg-8b APIs to decode a JPEG image from software-allocated buffers.
    07_video_convert: Uses V4L2 APIs to perform video format conversion and video scaling.
    08_video_dec_drm: Uses the NVIDIA® Tegra® Direct Rendering Manager (DRM) to render a video stream or UI.
    09_camera_jpeg_capture: Simultaneously uses the Libargus API to preview the camera stream and libjpeg-8b APIs to encode JPEG images.
    10_camera_recording: Gets the real-time camera stream from the Libargus API and feeds it into the video encoder to generate H.264/H.265 video files.
    12_camera_v4l2_cuda: Captures images from a V4L2 camera and shares the stream with CUDA engines to draw a black box in the upper left corner.
    13_multi_camera: Captures from multiple cameras and composites the streams into one frame.
    14_multivideo_decode: Decodes multiple H.264, H.265, VP8, VP9, MPEG4, and MPEG2 videos from local files and writes the YUV buffers into corresponding files.
    unittest_samples/decoder_unit_sample: Unit-level sample that decodes H.264 video from a local file and dumps the raw YUV buffer.
    unittest_samples/encode_sample: Unit-level sample that encodes a YUV bitstream from a local file and writes an elementary H.264 bitstream into a file.
    unittest_samples/transform_unit_sample: Unit-level sample that uses the nvbuf_utils utility to convert a YUV bitstream from one colorspace to another.
    Backend: Performs intelligent video analytics on four concurrent video streams: decoding with the on-chip decoders, video scaling with the on-chip scaler, and GPU compute.
    Frontend: Performs independent processing on four different resolutions of video capture coming directly from the camera.
    v4l2cuda (capture-cuda): Uses V4L2 image capture with CUDA format conversion.
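
As one illustration of the typical build-and-run flow (a minimal sketch: argument order differs by sample and by release, so run a binary without arguments to print its usage, and substitute any local H.264 elementary stream for the placeholder):

     $ cd ll_samples/samples/00_video_decode
     $ make
     $ ./video_decode H264 <path to an .h264 elementary stream>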


Tool Name: CAFFE to TensorRT Model Tool
Description: TBD
Directory Location: tools/ConvertCaffeToTrtModel


For details on each sample's structure and the APIs they use, see Multimedia API Sample Applications in this reference.