You can run the samples on Jetson without rebuilding them. However, if you modify those samples, you must rebuild them before running them.
For information on building the samples on a host Linux PC (x86), see Setting Up Cross-Platform Support.
Build and run the samples by following the procedures in this document.

To set up the display environment, enter:

$ export DISPLAY=:0
If you have already installed the required libraries, you can skip the following steps.

Download the JetPack installer from:

https://developer.nvidia.com/embedded/downloads
Make the installer executable, then run it:

$ chmod +x ./JetPack-L4T-<version>-linux-x64.run
$ ./JetPack-L4T-<version>-linux-x64.run

The installer extracts its contents into the _installer folder.

Create the libv4l2 symbolic link:

$ cd /usr/lib/aarch64-linux-gnu
$ sudo ln -sf libv4l2.so.0 libv4l2.so
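With the prerequisites in place, the samples can be built on the Jetson device with make. The following is a minimal sketch; the /usr/src/jetson_multimedia_api path is an assumption based on where recent L4T releases install the samples, and may differ on your system:

```shell
# Build every sample from the top-level samples directory.
# NOTE: the path below is an assumption; adjust it to where the
# samples are installed on your L4T release.
cd /usr/src/jetson_multimedia_api/samples
make

# Alternatively, build a single sample in isolation:
cd 00_video_decode
make
```

Each sample directory contains its own Makefile, so a single sample can be rebuilt after modification without rebuilding the full set.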
| Directory Location Relative to ll_samples/samples | Description |
|---------------------------------------------------|-------------|
| 00_video_decode (video decode) | Decodes H.264, H.265, VP8, VP9, MPEG4, and MPEG2 video from a local file and then shares the YUV buffer with the EGL renderer. |
| 01_video_encode (video encode) | Encodes a YUV bitstream from a local file and then writes elementary H.264/H.265 into a file. |
| 02_video_dec_cuda (CUDA processing with decode) | Decodes H.264/H.265 video from a local file and then shares the YUV buffer with CUDA to draw a black box in the left corner. |
| 03_video_cuda_enc (CUDA processing with encode) | Uses CUDA to draw a black box in the YUV buffer and then feeds it to the video encoder to generate an H.264/H.265 video file. |
| 04_video_dec_trt (TensorRT video decode) | Uses simple TensorRT calls to save the bounding box info to a file. |
| 05_jpeg_encode (JPEG encode) | Uses libjpeg-8b APIs to encode JPEG images from software-allocated buffers. |
| 06_jpeg_decode (JPEG decode) | Uses libjpeg-8b APIs to decode a JPEG image from software-allocated buffers. |
| 07_video_convert (NvBufSurface conversion) | Uses V4L2 APIs to do video format conversion and video scaling. |
| 08_video_dec_drm (Direct Rendering Manager) | Uses the NVIDIA® Tegra® Direct Rendering Manager (DRM) to render a video stream or UI. |
| 09_argus_camera_jpeg (libargus & libjpeg-8b) | Simultaneously uses the Libargus API to preview the camera stream and libjpeg-8b APIs to encode JPEG images. |
| 10_argus_camera_recording (libargus capture) | Gets the real-time camera stream from the Libargus API and feeds it into the video encoder to generate H.264/H.265 video files. |
| 11_video_osd (video OSD) | Draws rectangles/text/lines/circles/points onto video in CPU/GPU mode. |
| 12_v4l2_camera_cuda (camera capture CUDA processing) | Captures images from a V4L2 camera and shares the stream with CUDA engines to draw a black box in the upper left corner. |
| 13_argus_multi_camera (multi image capture & composite) | Captures from multiple cameras and composites them into one frame. |
| 14_multivideo_decode (multi video decode) | Decodes multiple H.264, H.265, VP8, VP9, MPEG4, and MPEG2 videos from local files and writes the YUV buffers into corresponding files. |
| 15_multivideo_encode (multi video encode) | Encodes multiple YUV bitstreams from local files and writes elementary H.264/H.265/VP8/VP9 into corresponding files. |
| 16_multivideo_transcode (multi video transcode) | Transcodes multiple bitstreams from local files and writes elementary H.264/H.265/VP8/VP9 into corresponding files. |
| 17_frontend (TensorRT multichannel video capture) | Performs independent processing on four different resolutions of video capture coming directly from the camera. |
| 18_v4l2_camera_cuda_rgb (CUDA format conversion) | Uses V4L2 image capture with CUDA format conversion. |
| unittest_samples/camera_unit_sample (capture with libv4l2_nvargus) | Unit-level sample; uses libv4l2_nvargus to preview the camera stream. |
| unittest_samples/decoder_unit_sample (video decode unit sample) | Unit-level sample; decodes H.264 video from a local file and dumps the raw YUV buffer. |
| unittest_samples/encoder_unit_sample (video encode unit sample) | Unit-level sample; encodes a YUV bitstream from a local file and writes an elementary H.264 bitstream into a file. |
| unittest_samples/transform_unit_sample (nvbuf_utils pixel format conversion) | Unit-level sample; uses the nvbuf_utils utility to convert a YUV bitstream from one colorspace to another. |
| backend (video analytics) | Performs intelligent video analytics on four concurrent video streams going through a decoding process using the on-chip decoders, video scaling using the on-chip scaler, and GPU compute. |
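As an illustration of how the samples in the table above are invoked, the decode sample can be run roughly as follows. The binary name, argument order, and input path here are assumptions inferred from the sample layout; consult each sample's own usage output for the authoritative invocation:

```shell
# Run the video decode sample on an H.264 elementary stream.
# NOTE: the binary name, codec argument, and input file are
# assumptions; run the binary with no arguments to see its usage.
cd 00_video_decode
./video_decode H264 /path/to/input.h264
```

Most samples follow this pattern of a codec or format argument followed by one or more input files, with optional flags controlling output and rendering.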
For details on each sample's structure and the APIs it uses, see Sample Applications in this reference.