L4T Multimedia API Reference

28.1 Release

Building and Running

You can run the samples on Jetson without rebuilding them. However, if you modify those samples, you must rebuild them before running them.

For information on building the samples on a host Linux PC (x86), see Setting Up Cross-Platform Support.

Build and run the samples by following the procedures in this document:

  1. Export environment variables.
  2. Use JetPack to install these components:
    • CUDA Toolkit
    • OpenCV4Tegra
    • cuDNN
    • NVIDIA® TensorRT, previously known as GIE
  3. Create symbolic links.
  4. Optionally, set up cross-compiler support.
  5. Build and run the samples.

Step 1: Export environment variables

  • Export the X display with the following command:
      $ export DISPLAY=:0
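The only variable this step sets is DISPLAY, which tells the samples which X display to render to. A quick sanity check after exporting (run on the Jetson itself, or over SSH with X forwarding configured):

```shell
# Export the display and confirm the variable is set as expected.
export DISPLAY=:0
echo "DISPLAY is set to ${DISPLAY}"
```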

Step 2: Use JetPack to install CUDA, OpenCV4Tegra, cuDNN, and TensorRT

If you have already installed these libraries, you can skip the following steps.

  1. Download JetPack from the NVIDIA Developer website.
  2. Run the installation script from the host machine with the following commands:
     $ chmod +x ./JetPack-L4T-<version>-linux-x64.run
     $ ./JetPack-L4T-<version>-linux-x64.run
  3. Select "Jetson TX2 Development Kit (64-bit) and Ubuntu host".
  4. Select "custom" and click "clear action".
  5. Select "CUDA Toolkit for L4T", "OpenCV for Tegra", "cuDNN Package", and "TensorRT", and then install.
  6. For installation details, see the _installer folder.
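Once JetPack finishes, the components can be spot-checked on the target. A minimal sketch follows; the package-name patterns are assumptions and vary by release, so adjust them for your setup.

```shell
# Assumed package-name patterns: spot-check that each JetPack
# component landed on the target by querying the package database.
for pkg in cuda cudnn opencv tensorrt; do
    if dpkg -l 2>/dev/null | grep -qi "$pkg"; then
        echo "$pkg: installed"
    else
        echo "$pkg: not found"
    fi
done
```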

Step 3: Create symbolic links

  • Create symbolic links with the following commands:
     $ cd /usr/lib/aarch64-linux-gnu
     $ sudo ln -sf tegra-egl/libEGL.so.1 libEGL.so
     $ sudo ln -sf tegra-egl/libGLESv2.so.2 libGLESv2.so
     $ sudo ln -sf libv4l2.so.0 libv4l2.so
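The same ln -sf pattern can be tried without sudo in a scratch directory, which also shows how to confirm a link points where intended; on the Jetson the real commands run in /usr/lib/aarch64-linux-gnu as shown above.

```shell
# Sketch of the symlink pattern in a temporary directory, with
# stand-in (empty) library files instead of the real Tegra libraries.
tmp=$(mktemp -d)
cd "$tmp"
mkdir tegra-egl
touch tegra-egl/libEGL.so.1 tegra-egl/libGLESv2.so.2 libv4l2.so.0
ln -sf tegra-egl/libEGL.so.1 libEGL.so
ln -sf tegra-egl/libGLESv2.so.2 libGLESv2.so
ln -sf libv4l2.so.0 libv4l2.so
readlink libEGL.so    # prints: tegra-egl/libEGL.so.1
```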

Step 4: Set up cross-compiler support (optional)

For information on building the samples on a host Linux PC (x86), see Setting Up Cross-Platform Support.

Step 5: Build and run the samples

  • Build and run, as described for each sample.
    Directory location (relative to ll_samples/samples) and description:
    02_video_dec_cuda Decodes H.264/H.265 video from a local file and then shares the YUV buffer with CUDA to draw a black box in the left corner.
    03_video_cuda_enc Uses CUDA to draw a black box in the YUV buffer and then feeds the buffer to the video encoder to generate an H.264/H.265 video file.
    04_video_dec_trt Uses simple TensorRT calls to save the bounding box info to a file.
    05_jpeg_encode Uses libjpeg-8b APIs to encode JPEG images from software-allocated buffers.
    06_jpeg_decode Uses libjpeg-8b APIs to decode a JPEG image from software-allocated buffers.
    07_video_convert Uses V4L2 APIs to do video format conversion and video scaling.
    09_camera_jpeg_capture Simultaneously uses the Libargus API to preview the camera stream and libjpeg-8b APIs to encode JPEG images.
    10_camera_recording Gets the real-time camera stream from the Libargus API and feeds it into the video encoder to generate H.264/H.265 video files.
    11_camera_object_identification Gets the real-time camera stream from the Libargus API and feeds it to Caffe for object classification.
    12_camera_v4l2_cuda Captures images from a V4L2 camera and shares the stream with CUDA engines to draw a black box in the upper left corner.
    Backend Performs intelligent video analytics on four concurrent video streams, decoded with the on-chip decoders, scaled with the on-chip scaler, and processed with GPU compute.
    Frontend Performs independent processing on four different resolutions of video capture coming directly from the camera.
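The build step has the same shape for every sample in the table: change into the sample directory under ll_samples/samples and run make. The real samples need the Jetson toolchain and libraries, so the sketch below stands in a scratch directory with a trivial Makefile to show the workflow only; binary names and arguments differ per sample, so consult each sample's section for how to run the result.

```shell
# Stand-in for: cd ll_samples/samples/02_video_dec_cuda && make
# A scratch directory with a trivial Makefile models the build step.
tmp=$(mktemp -d)
mkdir -p "$tmp/02_video_dec_cuda"
printf 'all:\n\t@echo built video_dec_cuda\n' > "$tmp/02_video_dec_cuda/Makefile"
cd "$tmp/02_video_dec_cuda"
make    # prints: built video_dec_cuda
```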

Tool name: CAFFE to TensorRT Model Tool
Description: TBD
Directory location: tools/ConvertCaffeToTrtModel

For details on each sample's structure and the APIs they use, see Multimedia API Sample Applications in this reference.