Jetson Linux Multimedia API Reference, 32.4.2 Release
This toolkit includes NVIDIA Multimedia API sample applications that you can use as building blocks to construct applications for your product use case.
The sample applications demonstrate how to use the Multimedia API and other libraries, such as Libargus, V4L2, libjpeg-8b, CUDA, and TensorRT.
The following table describes the samples.
Directory Location Relative to ll_samples/samples | Description |
---|---|
00_video_decode | Decodes H.264, H.265, VP8, VP9, MPEG4, and MPEG2 video from a local file and then shares the YUV buffer with the EGL renderer. |
01_video_encode | Encodes a YUV bitstream from a local file and then writes an elementary H.264/H.265 stream to a file. |
02_video_dec_cuda | Decodes H.264/H.265 video from a local file and then shares the YUV buffer with CUDA to draw a black box in the left corner. |
03_video_cuda_enc | Uses CUDA to draw a black box in the YUV buffer and then feeds it to the video encoder to generate an H.264/H.265 video file. |
04_video_dec_trt | Uses simple TensorRT calls to save the bounding box info to a file. |
05_jpeg_encode | Uses libjpeg-8b APIs to encode JPEG images from software-allocated buffers. |
06_jpeg_decode | Uses libjpeg-8b APIs to decode a JPEG image from software-allocated buffers. |
07_video_convert | Uses V4L2 APIs to do video format conversion and video scaling. |
08_video_dec_drm | Uses the NVIDIA® Tegra® Direct Rendering Manager (DRM) to render a video stream or UI. |
09_camera_jpeg_capture | Simultaneously uses the Libargus API to preview the camera stream and the libjpeg-8b API to encode JPEG images. |
10_camera_recording | Gets the real-time camera stream from the Libargus API and feeds it into the video encoder to generate H.264/H.265 video files. |
12_camera_v4l2_cuda | Captures images from a V4L2 camera and shares the stream with CUDA engines to draw a black box in the upper left corner; a minimal V4L2 capture sketch follows this table. |
13_multi_camera | Captures frames from multiple cameras and composites them into one frame. |
14_multivideo_decode | Decodes multiple H.264, H.265, VP8, VP9, MPEG4, and MPEG2 videos from local files and writes YUV buffer into corresponding files. |
15_multivideo_encode | Encodes multiple YUV bitstreams from local files and writes elementary H.264/H.265/VP8/VP9 into corresponding files. |
unittest_samples/decoder_unit_sample | Unit level sample; decodes H.264 video from a local file and dumps the raw YUV buffer. |
unittest_samples/encode_sample | Unit level sample; encodes YUV bitstream from a local file and writes elementary H.264 bitstream into file. |
unittest_samples/transform_unit_sample | Unit level sample; uses the nvbuf_utils utility to convert a YUV bitstream from one colorspace to another. |
Backend | Performs intelligent video analytics on four concurrent video streams, decoding with the on-chip decoders, scaling with the on-chip scaler, and using GPU compute. |
Frontend | Performs independent processing on four different resolutions of video capture coming directly from the camera. |
v4l2cuda (capture-cuda) | Uses V4L2 image capturing with CUDA format conversion. |
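
Several samples in the table (for example, 12_camera_v4l2_cuda and v4l2cuda) capture frames through the standard V4L2 interface before handing them to CUDA or the encoder. The following is a minimal sketch of that generic V4L2 capture flow, not code taken from the samples themselves; the device node /dev/video0, the 640x480 UYVY format, and the buffer count of four are illustrative assumptions.

```cpp
// Minimal V4L2 capture sketch (illustrative; not the sample code itself).
#include <fcntl.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <unistd.h>
#include <linux/videodev2.h>
#include <cstdio>
#include <vector>

int main() {
    int fd = open("/dev/video0", O_RDWR);                  // placeholder device node
    if (fd < 0) { perror("open"); return 1; }

    v4l2_format fmt{};                                      // negotiate the capture format
    fmt.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    fmt.fmt.pix.width = 640;                                // placeholder resolution
    fmt.fmt.pix.height = 480;
    fmt.fmt.pix.pixelformat = V4L2_PIX_FMT_UYVY;            // placeholder pixel format
    fmt.fmt.pix.field = V4L2_FIELD_NONE;
    if (ioctl(fd, VIDIOC_S_FMT, &fmt) < 0) { perror("VIDIOC_S_FMT"); return 1; }

    v4l2_requestbuffers req{};                              // ask the driver for mmap buffers
    req.count = 4;
    req.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    req.memory = V4L2_MEMORY_MMAP;
    if (ioctl(fd, VIDIOC_REQBUFS, &req) < 0) { perror("VIDIOC_REQBUFS"); return 1; }

    std::vector<void*> data(req.count);
    std::vector<size_t> length(req.count);
    for (unsigned i = 0; i < req.count; ++i) {              // map and enqueue every buffer
        v4l2_buffer buf{};
        buf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
        buf.memory = V4L2_MEMORY_MMAP;
        buf.index = i;
        if (ioctl(fd, VIDIOC_QUERYBUF, &buf) < 0) { perror("VIDIOC_QUERYBUF"); return 1; }
        length[i] = buf.length;
        data[i] = mmap(nullptr, buf.length, PROT_READ | PROT_WRITE, MAP_SHARED, fd, buf.m.offset);
        if (data[i] == MAP_FAILED) { perror("mmap"); return 1; }
        if (ioctl(fd, VIDIOC_QBUF, &buf) < 0) { perror("VIDIOC_QBUF"); return 1; }
    }

    v4l2_buf_type type = V4L2_BUF_TYPE_VIDEO_CAPTURE;       // start streaming
    if (ioctl(fd, VIDIOC_STREAMON, &type) < 0) { perror("VIDIOC_STREAMON"); return 1; }

    v4l2_buffer buf{};                                      // dequeue one filled frame
    buf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    buf.memory = V4L2_MEMORY_MMAP;
    if (ioctl(fd, VIDIOC_DQBUF, &buf) < 0) { perror("VIDIOC_DQBUF"); return 1; }
    std::printf("captured %u bytes into buffer %u\n", buf.bytesused, buf.index);
    // ...process data[buf.index] here, then re-queue it with VIDIOC_QBUF...

    ioctl(fd, VIDIOC_STREAMOFF, &type);                     // stop streaming and clean up
    for (unsigned i = 0; i < req.count; ++i) munmap(data[i], length[i]);
    close(fd);
    return 0;
}
```

In the samples, the dequeued frame is typically shared with the hardware engines or CUDA rather than processed on the CPU as shown here.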
There is one tool that the samples can use.
Tool Name | Description | Directory Location |
---|---|---|
CAFFE to TensorRT Model Tool | Standalone tool that converts a Caffe network to a TensorRT-compatible model and saves the serialized model stream to a local file for later use, as sketched below. | tools/ConvertCaffeToTrtModel |
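
The conversion this tool performs can be approximated with the TensorRT Caffe parser. Below is a minimal sketch based on the pre-TensorRT-8 C++ API generation used with this release; the file names, the output blob name ("prob"), and the batch and workspace sizes are illustrative assumptions, and this is not the tool's actual source.

```cpp
// Sketch: parse a Caffe network with TensorRT and serialize the engine to a file.
// Assumes the pre-TensorRT-8 API (createNetwork/buildCudaEngine).
#include <fstream>
#include <iostream>
#include "NvInfer.h"
#include "NvCaffeParser.h"

using namespace nvinfer1;
using namespace nvcaffeparser1;

// TensorRT requires a logger implementation.
class Logger : public ILogger {
    void log(Severity severity, const char* msg) noexcept override {
        if (severity <= Severity::kWARNING) std::cerr << msg << std::endl;
    }
} gLogger;

int main() {
    IBuilder* builder = createInferBuilder(gLogger);
    INetworkDefinition* network = builder->createNetwork();
    ICaffeParser* parser = createCaffeParser();

    // Parse the deploy prototxt and weights into the TensorRT network definition.
    const IBlobNameToTensor* blobs =
        parser->parse("deploy.prototxt", "model.caffemodel", *network, DataType::kFLOAT);
    if (!blobs) { std::cerr << "Caffe parse failed" << std::endl; return 1; }

    // Mark the network output ("prob" is a placeholder blob name).
    network->markOutput(*blobs->find("prob"));

    builder->setMaxBatchSize(1);
    builder->setMaxWorkspaceSize(16 << 20);  // 16 MB of scratch space

    ICudaEngine* engine = builder->buildCudaEngine(*network);
    if (!engine) { std::cerr << "engine build failed" << std::endl; return 1; }

    // Serialize the engine so it can be reloaded later without re-parsing Caffe.
    IHostMemory* serialized = engine->serialize();
    std::ofstream out("model.trt", std::ios::binary);
    out.write(static_cast<const char*>(serialized->data()), serialized->size());

    serialized->destroy();
    engine->destroy();
    parser->destroy();
    network->destroy();
    builder->destroy();
    return 0;
}
```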
The following diagram illustrates the L4T software stack. Within the Multimedia box (center, bottom), the Multimedia API provides components such as Libargus for imaging applications and V4L2 API extensions for video decoding, encoding, format conversion, and scaling. With these components, you can access other libraries, such as TensorRT and cuDNN.
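
For instance, an application built on these components can load a TensorRT engine (such as the one serialized by the conversion tool above) and prepare it to run inference on decoded frames. The sketch below again assumes the pre-TensorRT-8 C++ API; the file name model.trt is an illustrative assumption carried over from the previous sketch.

```cpp
// Sketch: load a serialized TensorRT engine and create an execution context.
// Assumes the same pre-TensorRT-8 API generation as the previous sketch.
#include <fstream>
#include <iostream>
#include <iterator>
#include <vector>
#include "NvInfer.h"

using namespace nvinfer1;

class Logger : public ILogger {
    void log(Severity severity, const char* msg) noexcept override {
        if (severity <= Severity::kWARNING) std::cerr << msg << std::endl;
    }
} gLogger;

int main() {
    // Read the serialized engine produced earlier (file name is a placeholder).
    std::ifstream in("model.trt", std::ios::binary);
    std::vector<char> blob((std::istreambuf_iterator<char>(in)), std::istreambuf_iterator<char>());

    IRuntime* runtime = createInferRuntime(gLogger);
    ICudaEngine* engine = runtime->deserializeCudaEngine(blob.data(), blob.size(), nullptr);
    if (!engine) { std::cerr << "deserialization failed" << std::endl; return 1; }

    // The execution context is what the application uses to run inference
    // on buffers it fills, for example with decoded and scaled YUV frames.
    IExecutionContext* context = engine->createExecutionContext();
    std::cout << "engine has " << engine->getNbBindings() << " bindings" << std::endl;

    context->destroy();
    engine->destroy();
    runtime->destroy();
    return 0;
}
```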