Test Plan and Validation
Flash and Boot
This section provides information about the Flash and Boot test cases.
Check the Flash and Boot Using the flash.sh Script
To flash the target device:
To place the target device into recovery mode, complete the following tasks:
Power on the carrier board and hold the Recovery button.
Press the RESET button.
Note
You can also run the topo command to place the device in recovery mode. The topo command is supported only on Jetson Orin.
If you did not place the target device in recovery mode manually, run the topo command.
To download the customer release build from the server, run the following command:
(Optional) Start the serial console to check boot logs.
Flash the device.
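If you flash manually with flash.sh, the invocation typically looks like the following. The board config (jetson-agx-orin-devkit) and root device (internal, the eMMC) are example values, not prescribed by this test plan; substitute the ones for your platform.

```shell
# Run from the Linux_for_Tegra directory of the extracted release build.
# Board config and root device are example arguments; adjust per platform.
sudo ./flash.sh jetson-agx-orin-devkit internal
```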
Expected Result: The device should boot correctly without a crash or kernel panic. Refer to FlashingSupport for more information.
Check the Flash and Boot SDKM
Note
The following steps describe flashing the NVIDIA shared image on the Jetson platform. If you have a custom BSP, flash it first, and then use SDKM only to install additional SDK packages.
Go to https://developer.nvidia.com/nvidia-sdk-manager and download sdkmanager.
To install SDK Manager from the downloaded .deb package on your Ubuntu host, run the following command.
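The install command looks like the following; the version and build numbers in the package file name are placeholders for the file you downloaded.

```shell
# Install the downloaded SDK Manager package on the Ubuntu host.
# [version] and [build] are placeholders from the downloaded file name.
sudo apt install ./sdkmanager_[version]-[build]_amd64.deb
```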
In a terminal window, to start SDK Manager, run the sdkmanager command.
In the SDK Manager launch screen, select the appropriate login tab for your account type:
- NVIDIA Developer (developer.nvidia.com)
- NVONLINE (partners.nvidia.com)
To connect the serial console, run the sudo minicom -c on -w -D /dev/ttyACM0 command.
When you launch sdkmanager, your connected device should be identified and listed, for example, Jetson AGX Orin.
To place the target device into reset/recovery mode, complete the following tasks:
Make sure the device is powered on; if it is not, press the power button.
Press and hold the Recovery (RCM) button.
Press the Reset button.
Release the Recovery (RCM) button.
Select Host Machine, Target Hardware (Jetson AGX Orin), and the latest available version of JetPack.
Select Packages to install.
Select all the product categories to be used to validate other feature use cases, and continue to the next step.
Select Manual Setup - Jetson AGX Orin, eMMC as the storage device, and the user name/password to be set for the flashed image.
To debug a failure in the terminal tab in SDK Manager, complete the steps in the SDK Manager User Guide (https://docs.nvidia.com/sdk-manager/index.html).
Select Install.
To verify the flash, log in to the device.
Verify that the installation completed successfully.
Expected Result: The device boot should complete without any crash or kernel panic, and all JetPack components should be installed properly on boot. Refer to https://docs.nvidia.com/sdk-manager/install-with-sdkm-jetson/index.html for more information.
NVMe Boot
Before you begin, ensure that you have an NVMe drive with a minimum of 16 GB of storage.
To flash the target device:
To place the target device in recovery mode, complete the following tasks:
Power on the carrier board and keep the Recovery button pressed.
Press the Reset button.
Note
You can also use the topo
command to place the device in recovery mode.
If you have not yet manually placed the target device in recovery mode, run the topo
command.
Download the customer release build from the server.
(Optional) To check boot logs, start the serial console.
Flash the device.
Complete the setup, for example, by entering the user name, location, time zone, and so on.
Log in to the system.
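One possible flash invocation that targets the NVMe drive as the root device is shown below; the board config and partition name are assumptions, and recent releases may instead use the initrd-based flashing tools for external storage.

```shell
# Flash with the NVMe partition as the root device (example arguments;
# verify the supported flow for your release).
sudo ./flash.sh jetson-agx-orin-devkit nvme0n1p1
```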
NFS Boot
Install the NFS packages.
Complete the following steps to flash the target device:
Place the target device into recovery mode.
Power on the carrier board and press the Recovery button.
Press the Reset button.
Note
You can also use the topo command to place the device in recovery mode.
Download the customer release build from the server.
If you have not yet manually placed the target device in recovery mode, run the topo command.
(Optional) To check boot logs, start the serial console.
Add the rootfs path to the /etc/exports file.
Restart the NFS service.
Refer to RCM Boot to NFS for more information.
To make sure that your NFS share is visible to the client, run the following command on the NFS server.
Run the rcm boot command. For example:
sudo ./flash.sh -N 10.24.212.249:$HOME/generic_release_aarch64/Linux_for_Tegra/Linux_for_Tegra/rootfs --rcm-boot jetson-agx-orin-devkit eth0
Complete the setup.
Log in to the system.
Expected Result: The device should boot properly without a crash or kernel panic. Refer to https://forums.developer.nvidia.com/t/how-to-boot-from-nfs-on-xavier-nx/142505 for more information.
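On the NFS server, the export entry and the visibility check in the steps above can look like the following; the rootfs path and subnet are examples for your setup.

```shell
# Example /etc/exports entry (path and subnet are examples):
#   /home/user/Linux_for_Tegra/rootfs 10.24.212.0/24(rw,no_root_squash,no_subtree_check,async)

sudo exportfs -ra                         # re-read /etc/exports
sudo systemctl restart nfs-kernel-server  # restart the NFS service
showmount -e localhost                    # verify the share is visible
```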
System Software
Detection of USB Hub (FS/HS/SS)
Boot the target.
Connect the USB Hub (FS/HS/SS) to one USB port in the target device.
Check the serial terminal.
The serial terminal shows the model and speed of the hub. You can also check the dmesg logs for more information.
Connect the hub to the second port and check.
Expected Result: The USB hub should be enumerated on all USB ports of the target.
Detection of USB-3.0 Flashdrive (Hot plug-in)
Boot the target.
Connect the USB3 pendrive to one USB port of the target.
Using a serial terminal, check whether the flash drive has enumerated by running the lsusb command.
You can also check whether the drive is listed in the file browser and check the dmesg logs for more information.
Copy the files to the pendrive.
Complete steps 1-4 on all USB ports on the target.
Expected Result: The USB3 Pendrive should be detected on all of the USB ports of the target.
Note
Not all ports are USB 3.0 capable, so your device might operate at lower speeds. Review the relevant developer kit documentation to determine which ports are USB 3.0 capable.
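As a quick reference when checking enumeration speed, the numeric value in /sys/bus/usb/devices/&lt;dev&gt;/speed maps to a USB speed class; a small helper like the following (a sketch, not an NVIDIA tool) can translate it:

```shell
# Translate the numeric speed from /sys/bus/usb/devices/<dev>/speed
# into a USB speed class name.
usb_speed_name() {
    case "$1" in
        1.5)   echo "low-speed (USB 1.0)" ;;
        12)    echo "full-speed (USB 1.1)" ;;
        480)   echo "high-speed (USB 2.0)" ;;
        5000)  echo "super-speed (USB 3.0)" ;;
        10000) echo "super-speed+ (USB 3.1)" ;;
        *)     echo "unknown" ;;
    esac
}

# Usage on the target:
#   for d in /sys/bus/usb/devices/*/speed; do
#       printf '%s: %s\n' "$d" "$(usb_speed_name "$(cat "$d")")"
#   done
```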
Detecting the Keyboard
Boot the target.
Connect the keyboard to the device.
Using a serial terminal, check whether the keyboard has enumerated, for example, by running the lsusb command.
Press any key on the keyboard to verify the functionality.
Expected Result: The keyboard should be successfully detected and functional.
Detecting the Mouse
Boot the target.
Connect the USB mouse to the device.
Using a serial terminal, check whether the mouse has enumerated.
Use the connected mouse to verify the functionality.
Expected Result: The mouse should be successfully detected and functional.
Cold Boot 10 Cycles
Boot the target.
Log in to the device.
Run the power down command.
Press the power button.
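The power-down command can be, for example:

```shell
# Halt the system; after power-off, press the power button to cold boot.
sudo shutdown -h now
```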
Expected Result: The device should reboot successfully for each iteration without any kernel panic, failures, or errors.
Warm Boot 10 Cycles
Boot the target.
Log in to the device.
Run the
sudo systemctl reboot
command to warm boot the device.
Expected Result: The device should reboot successfully for each iteration without any kernel panic, failures, or errors.
Checking for Failures and Error Messages in dmesg
Boot the target.
Check the dmesg logs for failures and errors.
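The check above can be scripted with a simple filter (a sketch; the pattern list is an assumption, extend it as needed):

```shell
# Print only lines that look like errors, failures, or warnings.
dmesg_errors() {
    grep -iE 'error|fail|warn' || true
}

# Usage on the target: sudo dmesg | dmesg_errors
```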
Expected Result: There should be no error or failure messages in the dmesg log.
Using the Display Port (DP)
Boot the target with the Display Port (DP) connected (1080P/4K).
Ensure that you see the boot logo, framebuffer, and the desktop on the connected DP.
Expected Result: There should be no corruption on the display.
LP0-Resume Basic Functionality
Note
This check needs to be completed five times.
Boot the target.
Ensure that you can access the serial terminal, for example, ttyUSB0.
On the serial terminal, run the sudo systemctl suspend command.
Expected Result: The device suspend prints are displayed on the serial terminal, and the display is off.
You can wake up the device by using the connected USB keyboard.
UART I/O (Debug Print Messages)
Boot the target.
Ensure that you can access the serial terminal, for example, ttyUSB0.
Configuring the 40-Pin Expansion Header
To start Jetson-IO on the developer kit, run the following command:
The options to configure the I/O are displayed.
Complete the steps in https://docs.nvidia.com/jetson/l4t/index.html#page/Tegra%20Linux%20Driver%20Package%20Development%20Guide.
Save and reboot.
After configuring the pins, reboot the device.
Start jetson-io.py again and check whether the pins have been configured for the selected option.
Refer to ConfiguringTheJetsonExpansionHeaders for more information.
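Jetson-IO ships with L4T at the following path (verify on your release):

```shell
# Launch the interactive 40-pin expansion header configuration tool.
sudo /opt/nvidia/jetson-io/jetson-io.py
```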
Jetson-Linux WatchDog Timer Enable Verification
Boot the target.
On the serial console, to crash the system, run the following command.
Check that the WatchDog Timer reboots the device after the timeout, for example, 180 seconds.
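The crash command referenced above is typically the sysrq crash trigger. Warning: this panics the kernel immediately; use it only for this test.

```shell
# Trigger an immediate kernel panic so the watchdog can reboot the device.
sudo bash -c "echo c > /proc/sysrq-trigger"
```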
Expected Result: The device should be rebooted by the WatchDog Timer without errors and failures.
Verifying the Boot Time Services and Scripts
Verify that all boot time scripts are correctly initialized.
To check that there are no failures because of the scripts, run the following command.
To verify that the services/scripts start without failure/errors, run the following command.
Check serial logs during the device boot-up.
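On a systemd-based rootfs, the two checks above can be run as, for example:

```shell
# List any units that failed to start during boot.
sudo systemctl list-units --state=failed

# Show boot-time log messages at error priority or higher.
sudo journalctl -b -p err
```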
Expected Result: There should not be any errors.
nvpmodel: Power Modes
Note
Nvpmodel introduces different power modes on the Jetson platforms. The selected mode determines which CPU cores are used and the maximum frequency of the CPU and GPU. Detailed specifications of the supported power modes for Jetson AGX Orin, Orin NX, and Orin Nano are listed in the section Supported Modes and Power Efficiency.
You can also find the power mode details for your device in /etc/nvpmodel.conf.
Setting and Verifying Supported Power Modes
To check the current power mode, boot the device and run the sudo nvpmodel -q command.
To set a desired power mode, run the sudo nvpmodel -m x command. In this command, x is a supported power mode ID.
To set the maximum supported power, run the sudo nvpmodel -m 0 command.
To check various settings for a power mode, run the sudo nvpmodel -q --verbose or sudo jetson_clocks --show command.
Repeat the preceding steps for all supported power modes. Refer to the section Supported Modes and Power Efficiency to find the supported power modes for your device and SKU.
Expected Result: You should be able to set a supported power mode. The CPU/GPU/EMC frequency should change based on the selected power mode.
Verify CPU, GPU, and EMC Frequencies
Perform a stress test on the system by installing and running a dedicated stress utility or glmark2. Alternatively, execute a dd command.
Monitor the CPU, GPU, and EMC frequencies using the sudo tegrastats or sudo jetson_clocks --show command.
Expected Result: The CPU/GPU/EMC frequencies should match the current power mode and should not exceed the power mode's maximum frequency.
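A minimal dd-based load for the stress step above can be, for example:

```shell
# Stream zeros through memory as a simple CPU/memory load.
# Increase count, or run one instance per core, for a sustained load.
dd if=/dev/zero of=/dev/null bs=1M count=1000
```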
Verify CPU, GPU, and EMC Frequencies after Resuming from SC7 State
Put the system into SC7 state and resume.
After resuming from the SC7 state, perform a stress test on the system by installing and running a dedicated stress utility or glmark2. Alternatively, execute a dd command.
Monitor the CPU, GPU, and EMC frequencies using the sudo tegrastats or sudo jetson_clocks --show command.
Expected Result: The CPU/GPU/EMC frequencies should match the current power mode and should not exceed the power mode's maximum frequency.
Verify Power Mode Is Preserved after Cold Boot
Turn off the device using sudo systemctl poweroff or sudo shutdown -h now.
Boot the device by pressing the power button.
After reboot, check the current power mode and CPU/GPU/EMC frequencies using the sudo tegrastats or sudo jetson_clocks --show command.
Perform a stress test on the system by installing and running a dedicated stress utility or glmark2. Alternatively, execute a dd command.
Check the current power mode and CPU/GPU/EMC frequencies again.
Repeat for 5 cold boot cycles.
Expected Result: The power mode should not change after a cold boot. The CPU/GPU/EMC frequencies should match the power mode.
Verify Power Mode Is Preserved after Warm Boot
Warm reboot the device using sudo reboot or sudo shutdown -r now.
After reboot, check the current power mode and CPU/GPU/EMC frequencies using the sudo tegrastats or sudo jetson_clocks --show command.
Perform a stress test on the system by installing and running a dedicated stress utility or glmark2. Alternatively, execute a dd command.
Check the current power mode and CPU/GPU/EMC frequencies again.
Expected Result: The power mode should not change after a warm boot. The CPU/GPU/EMC frequencies should match the power mode.
Running the tegrastats Command
Run the tegrastats command.
Expected Result: The specification values for active CPUs and GPUs should be reflected in the output:
07-11-2023 07:10:08 RAM 4601/63895MB (lfb 3x64MB) CPU [0%@729,0%@729,0%@729,0%@729,0%@729,0%@729,0%@729,0%@729,0%@729,0%@729,0%@729,0%@729] GR3D_FREQ 0% cv0@0C cpu@45.062C tboard@33.25C soc2@41.062C tdiode@38.625C soc0@41.656C cv1@0C gpu@0C tj@45.062C soc1@40.781C cv2@0C VDD_GPU_SOC 3065mW/3065mW VDD_CPU_CV 766mW/766mW VIN_SYS_5V0 3623mW/3623mW NC 0mW/0mW VDDQ_VDD2_1V8AO 503mW/503mW NC 0mW/0mW
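For scripted checks, values such as RAM usage can be extracted from a captured tegrastats line with a small filter like the following (a sketch, not part of tegrastats itself):

```shell
# Print "<used> <total>" RAM in MB from tegrastats output on stdin.
tegrastats_ram() {
    sed -n 's/.*RAM \([0-9]*\)\/\([0-9]*\)MB.*/\1 \2/p'
}

# Usage on the target: sudo tegrastats | tegrastats_ram
```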
Graphics
Running a Graphics Application Using X11
Connect the display to the target and boot the target.
If GDM is enabled, log in to the device via the Desktop interface. This step is also necessary to run X11 graphics binaries over SSH. If desired, automatic login can be enabled by editing /etc/gdm3/custom.conf and adding AutomaticLoginEnable=True and AutomaticLogin=<username> under the [daemon] section.
Export the DISPLAY variable, for example, export DISPLAY=:0.
To check for various display modes that are supported by the display, run the following command:
$ xrandr
Run the glxgears sample application by executing the following command. Verify that the frames per second matches the display information in the previous step.
$ glxgears -fullscreen
Run the bubble graphics application by executing the following command.
$ /usr/src/nvidia/graphics_demos/prebuilts/bin/x11/bubble
Expected Result: The graphics applications should render successfully on the display, and no corruption or hang should be observed while rendering. Refer to Graphics for more information.
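The automatic-login change described above corresponds to the following fragment of /etc/gdm3/custom.conf:

```ini
[daemon]
AutomaticLoginEnable=True
AutomaticLogin=<username>
```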
Running a Graphics Application Using EGLdevice (DRM)
Connect the display to the target and boot the target.
Stop GDM, and if X is running on the target, kill it.
$ sudo service gdm stop
$ sudo pkill x
Load the NVIDIA drm module.
For Jetson AGX Orin:
$ sudo modprobe nvidia_drm modeset=1
Run the bubble graphics application by executing the following command.
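By analogy with the x11 and wayland binaries used elsewhere in this plan, the DRM variant of the demo is presumably at the following path; verify it on your rootfs before relying on it.

```shell
# Run the EGLDevice/DRM bubble demo (path assumed by analogy with the
# x11/ and wayland/ prebuilts; verify on your build).
sudo /usr/src/nvidia/graphics_demos/prebuilts/bin/egldevice/bubble
```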
Expected Result: The graphics application should render successfully on the display. There should be no corruption or hang while rendering.
Running a Graphics Application Using Wayland
Connect the display to the target and boot it after flashing.
If X is running on the target, kill it.
$ sudo service gdm stop
$ sudo pkill x
Launch Wayland using the following commands.
$ unset DISPLAY
Run the following commands.
$ export WESTON_TTY=1
$ sudo XDG_RUNTIME_DIR=/tmp/xdg weston --tty=$WESTON_TTY --idle-time=0 &
Press Enter.
Run the bubble graphics application by executing the following command.
$ sudo XDG_RUNTIME_DIR=/tmp/xdg /usr/src/nvidia/graphics_demos/prebuilts/bin/wayland/bubble
Expected Result: The graphics binary should render successfully on the display. No corruption or hang should be observed while rendering.
Refer to Graphics for more information.
Kernel
Checking the Kernel Version
Boot the device.
To determine the kernel version number, run the uname -r command.
Expected Result: The kernel version should be displayed, for example, 5.16.0-tegra-g44acfbed970e.
Verifying Unloading of Kernel Modules Using modprobe
Log in to the device.
To locate the loaded, active modules, run the lsmod command.
The output lists each active module and its dependent modules, for example, x_tables 49152 5 xt_conntrack,iptable_filter,xt_addrtype,ip_tables,xt_MASQUERADE.
To remove this module, run the sudo modprobe -r x_tables command.
The following error message will be displayed:
modprobe: FATAL: Module x_tables is in use
This message is expected because the module is being used by the other modules listed against x_tables in the lsmod output. You have to remove all modules that depend on x_tables before removing x_tables.
Remove modules that have no dependent modules.
This step will not throw an error for modules such as rtl8822ce.
Test the modules without dependencies, where the "Used by" count is 0.
After a module is removed, removing it again prints nothing (for example, rtl8822ce or userspace_alert); this outcome is expected.
Expected Result: Unloading of the module should happen without failure. No error/failure/warning/long delay should happen during, or after, the process, and the system should remain stable.
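To find modules that can be removed without dependency errors, the lsmod output can be filtered for a zero use count with a helper like this (a sketch, not an NVIDIA tool):

```shell
# Print the names of loaded modules whose "Used by" count is 0;
# these are safe candidates for "sudo modprobe -r <module>".
unloadable_modules() {
    awk 'NR > 1 && $3 == 0 { print $1 }'
}

# Usage on the target: lsmod | unloadable_modules
```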
Verifying the Previous Hang log - last_kmsg/console-ramoops
This procedure checks whether the console-ramoops file, which is also known as the last_kmsg file, is generated after a reboot when a system hang happens.
Power off the device using the Power button or by running the sudo poweroff command.
Manually power the device on again.
After the system boots up, ensure that there is no sysfs node (/sys/fs/pstore/console-ramoops).
To complete a typical boot, run the sudo reboot command.
After the system boots up, run the sudo cat /sys/fs/pstore/console-ramoops-0 command and check whether logs are being dumped into the generated file.
To trigger a kernel panic, run the sudo bash -c "echo c > /proc/sysrq-trigger" command.
Reboot the device manually.
After the system boots up, to check whether console_ramoops is generated, run the sudo cat /sys/fs/pstore/console-ramoops-0 command.
The output should show the watchdog timeout kernel messages.
Check the dmesg-ramoops-0 file for the dmesg logs.
The file name is console-ramoops-<x>, where <x> is a numeric value generated at run time.
Expected Result: When a system hangs, a console_ramoops file is generated under /sys/fs/pstore with enough information about the previous hang.
Check DVFS Scaling
This procedure allows you to check whether Dynamic Voltage/Frequency Scaling (DVFS) and EMC scaling are working.
Before you begin:
Ensure that jetson_clocks is not active. (You can just reboot the device.)
Verify that the nvpmodel setting is Max-N. To verify this setting, run the sudo nvpmodel -m 0 command.
Keep device idle for five minutes.
Display and note down the CPU frequency values.
cat /sys/devices/system/cpu/cpu[0-9]/cpufreq/cpuinfo_cur_freq
Run a CPU workload, such as a system benchmark, for example, SPECint.
Observe change in freq values.
Expected Result: The values should be reflected in the scaling nodes. To check changes in the CPU/GPU frequencies, you can also run the tegrastats command instead of reading the frequency scaling nodes.
CPU-therm System Throttle Alert Check
NVIDIA provides UI notifications when the CPU temperature reaches the trip point. A persistent CPU hot/thermal throttle alert toast message appears in the upper right corner, and an exclamation mark (!) appears in the task bar.
To raise the temperature of the device, run multiple apps/benchmarks on the device for a long time.
When the CPU temperature reaches the trip point, the CPU thermal warning toast message will appear.
Expected Result: You should see a Hot surface alert when the CPU temperature reaches the trip point. You should also see throttle-alert cooling state alerts on the serial console. Refer to ThermalSpecifications for more information.
Camera
Before you begin:
Install the v4l2 utility on the device.
Set the sensor-id based on the output from v4l2-ctl --list-devices (for example, /dev/video<0/1/2>, where 0/1/2 are the sensor IDs identified by v4l2-ctl).
Device: Test Image Capture with Camera Devices
Start the argus camera app with every camera device and capture an image.
View the captured image.
Expected Result: You should be able to start the camera and capture the image. Refer to AcceleratedGstreamer for more information.
Device: Test Video Capture with Camera Devices
Start the argus camera app with every camera device and capture a video.
View the captured video.
Expected Result: You should be able to start the camera and capture the video. Refer to AcceleratedGstreamer for more information.
Verifying IMX274 Camera Sensor
With the IMX274 dual camera module connected to the target, run the following commands:
nvargus_nvraw --sensorinfo --c <sensor-id1>
For example,
nvargus_nvraw --sensorinfo --c 0
nvargus_nvraw --sensorinfo --c <sensor-id2>
For example,
nvargus_nvraw --sensorinfo --c 1
Verify that both sensors are detected.
Capturing a JPEG Image from Each Sensor
To capture a .jpeg image from each sensor, run one of the following commands:
nvargus_nvraw --c 0 --format jpg --file ${HOME}/frame-cam0
nvargus_nvraw --c 1 --format jpg --file ${HOME}/frame-cam1
Expected Result: You should be able to capture the image.
Comms
WiFi AP Connectivity with WPA2 Security
Boot the device.
Open the GUI Wi-Fi settings and connect to an AP with WPA2 security.
Expected Result: You should be able to connect to the selected Wi-Fi AP.
MM Content (YouTube (TM) 1080p) Streaming Over WiFi
Connect to the Wi-Fi AP through the GUI or the command-line interface (CLI).
Ensure that the Ethernet cable is disconnected.
Start the Chrome (TM) browser on the target.
Play any 1080P video on YouTube.
Expected Result: The Wi-Fi connection should work, and YouTube video playback should be smooth.
Setting up WiFi AP Connection over the Command-Line Interface
Flash the build that has no GUI installed.
If you have ubuntu-desktop, disable WiFi using the WiFi settings after the boot is complete.
Boot the device and connect it to an AP using the command-line interface (CLI).
$ ifconfig -a
Note
If the WiFi is soft/hard blocked by rfkill, run the sudo rfkill unblock all command.
Identify the WiFi interface and configure it.
$ iwconfig
$ sudo ifconfig wlan0 up
$ sudo iwlist wlan0 scan | grep ESSID
$ sudo apt install wpasupplicant
$ wpa_passphrase YOUR_AP_NAME PASSWORD | sudo tee /etc/wpa_supplicant.conf
$ sudo wpa_supplicant -c /etc/wpa_supplicant.conf -i wlan0
Open another terminal:
$ sudo systemctl stop NetworkManager
$ iwconfig
$ ifconfig wlan0
$ sudo ifconfig wlan0 up
$ sudo dhclient wlan0
$ ifconfig wlan0
$ ifconfig -a
Verify network connectivity.
$ ping -I wlan0 8.8.8.8 (you can test your local IP)
Expected Result: The Wi-Fi should turn on and connect to the AP, and the connection should be consistent and free from drops.
Bluetooth Pairing and Unpairing
Boot the device.
Open the Bluetooth GUI settings.
Check nearby Bluetooth devices.
Pair the selected device, for example, the Bluetooth Keyboard.
To disconnect the device, double-click the connected device and turn off the connection.
To permanently remove the device, click Remove disconnected device.
Expected Result: You should be able to pair the selected device, for example, a Bluetooth keyboard, and the connected device should work properly.
Ethernet LAN Connectivity
Boot the device with the Ethernet cable connected to the device’s Ethernet port.
In a terminal window, ping 8.8.8.8. For example, if the Ethernet interface is eno1, execute the following command.
$ ping -I eno1 8.8.8.8
Expected Result: You should be able to ping 8.8.8.8 without any packet loss.
Ethernet LAN Hot-plug
Boot the device with the Ethernet cable connected to the device's Ethernet port.
Disconnect and reconnect the Ethernet cable.
In a terminal window, ping 8.8.8.8. For example, if the Ethernet interface is eno1, execute the following command.
$ ping -I eno1 8.8.8.8
Expected Result: You should be able to ping 8.8.8.8 without any packet loss.
Ethernet LAN Bandwidth
Boot the device with the Ethernet cable connected to the device’s Ethernet port.
Check the device's Ethernet IP address. For example, if the Ethernet interface is eno1, execute the following command to see the IP address.
$ ip addr show eno1
2: eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 48:b0:2d:78:83:46 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.180/24 brd 192.168.1.255 scope global dynamic eno1
       valid_lft 77658sec preferred_lft 77658sec
    inet6 fe80::cbba:a3a8:ccf5:d0e8/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
Check the Ethernet line speed (reported in Mbps). For example, if the Ethernet interface is eno1, execute the following command to check the line speed.
$ cat /sys/class/net/eno1/speed
1000
Install iperf3 on the target device and a host machine on the same network. To install iperf3 on the target device, execute the following commands.
$ sudo apt update
$ sudo apt install iperf3
Start the iperf3 server on the target.
$ iperf3 -s
Start the iperf3 client on the host machine.
$ iperf3 -c <target-ip-address> -P8 -t 60
Expected Result: The bandwidth reported by iperf3 should be close to the line speed of the Ethernet connection.
Multimedia Encode/Decode
Before you begin:
nvidia-l4t-gstreamer must be installed to run GStreamer pipelines. Refer to SoftwarePackagesAndTheUpdateMechanism for more information.
Log in to the device and open a terminal window.
Verify that the MM sample files are available on the device.
If the test case requires it, connect a 4K display.
Ensure that your HDMI TV is connected, and the X server and the Ubuntu desktop are running.
Camera capture using GStreamer
To enable ISP processing for CSI cameras or Bayer captures, use the nvarguscamerasrc GStreamer plugin.
Before you begin, ensure that the camera is connected and working. Refer to Camera for more information.
Note
Set the sensor-id based on the output of the v4l2-ctl --list-devices command.
Capture an image
Run the following command.
$ gst-launch-1.0 nvarguscamerasrc num-buffers=1 sensor-id=0 ! 'video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, format=(string)NV12' ! nvjpegenc ! filesink location=${HOME}/gst-frame-cam0.jpg
$ gst-launch-1.0 nvarguscamerasrc num-buffers=1 sensor-id=1 ! 'video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, format=(string)NV12' ! nvjpegenc ! filesink location=${HOME}/gst-frame-cam1.jpg
Expected Result: You should be able to capture the image.
Capturing a Motion-JPEG Stream
Run the following command.
$ gst-launch-1.0 nvarguscamerasrc num-buffers=300 sensor-id=0 ! 'video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, format=(string)NV12, framerate=(fraction)30/1' ! nvjpegenc ! avimux ! filesink location=${HOME}/mjpeg.avi -e
Expected Result: The motion-JPEG stream should get captured without crashes or errors.
Preview the Camera Stream
Run the following command.
$ gst-launch-1.0 nvarguscamerasrc num-buffers=300 sensor-id=0 ! 'video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, format=(string)NV12, framerate=(fraction)30/1' ! nvegltransform ! nveglglessink sync=0
Expected Result: You should be able to stream without crashes or errors.
Capturing and Recording Video from the Camera
Run the following command.
$ gst-launch-1.0 nvarguscamerasrc num-buffers=300 sensor-id=0 ! 'video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, format=(string)NV12, framerate=(fraction)30/1' ! nvv4l2h265enc bitrate=8000000 ! h265parse ! qtmux ! filesink location=<filename_h265>.mp4
Expected Result: You should be able to play back the stream without crashes or errors.
Encode using GStreamer
Run one of the following commands.
$ gst-launch-1.0 nvarguscamerasrc num-buffers=300 sensor-id=0 ! 'video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, format=(string)NV12, framerate=(fraction)30/1' ! nvv4l2h265enc ! h265parse ! qtmux ! filesink location=<filename_h265>.mp4
$ gst-launch-1.0 videotestsrc num-buffers=300 ! 'video/x-raw, width=(int)1920, height=(int)1080, format=(string)I420, framerate=(fraction)30/1' ! nvvidconv ! 'video/x-raw(memory:NVMM), format=(string)I420' ! nvv4l2h264enc ! h264parse ! qtmux ! filesink location=<filename_h264>.mp4 -e
$ gst-launch-1.0 filesrc location=<filename_1080.yuv> ! videoparse width=1920 height=1080 format=2 framerate=30 ! 'video/x-raw, format=(string)I420' ! nvvidconv ! 'video/x-raw(memory:NVMM), format=(string)NV12' ! nvv4l2av1enc ! matroskamux ! filesink location=<filename_av1>.mkv -e
Expected Result: The encoded stream should be correct, and there should be no corruption in the stream.
Decode using GStreamer
Run one of the following commands.
$ gst-launch-1.0 filesrc location=<filename_h264>.mp4 ! qtdemux ! queue ! h264parse ! nvv4l2decoder ! nv3dsink -e
$ gst-launch-1.0 filesrc location=<filename_h265>.mp4 ! qtdemux ! queue ! h265parse ! nvv4l2decoder ! nv3dsink -e
$ gst-launch-1.0 filesrc location=<filename_av1>.webm ! matroskademux ! queue ! nvv4l2decoder ! nv3dsink -e
Expected Result: There should be no corruption or buffer drops during playback.
JPEG Decode using GStreamer
You can complete this task in one of the following ways:
Using
nv3dsink
$ gst-launch-1.0 -v filesrc location=<JPEG_IMAGE_LOCATION><IMAGE_NAME>.jpg ! jpegparse ! nvjpegdec ! nvvidconv ! 'video/x-raw(memory:NVMM), format=RGBA' ! nv3dsink
Using
nveglglessink
$ gst-launch-1.0 -v filesrc location=<JPEG_IMAGE_LOCATION><IMAGE_NAME>.jpg ! jpegparse ! nvjpegdec ! nvegltransform ! nveglglessink
Expected Result: The JPEG decoding should be correct, and there should be no corruption with the decoded image.
JPEG Encode using GStreamer
Run the following command.
$ gst-launch-1.0 videotestsrc num-buffers=1 ! 'video/x-raw, width=1920, height=1080, format=(string)I420' ! nvvidconv ! 'video/x-raw(memory:NVMM)' ! nvjpegenc ! filesink location=${HOME}/frame.jpg
Expected Result: The JPEG encode should be correct, and there should be no corruption with the encoded image.
Transform using GStreamer
nvvidconv can be used to perform video format conversion, scaling, and cropping operations. Refer to AcceleratedGstreamer for more information.
Run one of the following commands.
$ gst-launch-1.0 videotestsrc num-buffers=100 ! 'video/x-raw, format=(string)UYVY, width=(int)1280, height=(int)720' ! nvvidconv ! 'video/x-raw(memory:NVMM), format=(string)I420' ! nvv4l2h264enc ! h264parse ! qtmux ! filesink location=<test>.mp4
$ gst-launch-1.0 videotestsrc num-buffers=100 ! 'video/x-raw, format=(string)I420, width=(int)1280, height=(int)720' ! nvvidconv ! 'video/x-raw(memory:NVMM), width=(int)640, height=(int)480, format=(string)I420' ! nvv4l2h264enc ! h264parse ! qtmux ! filesink location=<test>.mp4
Expected Result: The encoded files should be correct with no corruption.
Transcode using GStreamer
Run one of the following commands.
$ gst-launch-1.0 filesrc location=<filename_1080p_h264.mp4> ! qtdemux ! h264parse ! nvv4l2decoder ! nvv4l2h265enc ! h265parse ! qtmux ! filesink location=<Transcoded_filename>.mp4 -e
$ gst-launch-1.0 filesrc location=<filename_1080p_h265.mp4> ! qtdemux ! h265parse ! nvv4l2decoder ! nvv4l2av1enc ! matroskamux ! filesink location=<Transcoded_filename>.mkv -e
Expected Result: The encoded file stream should run correctly with no corruption.
Video Playback using Application
This procedure verifies that the playback of a 4K video with H265 codec is successful using nvgstplayer-1.0.
Run the following command.
$ nvgstplayer-1.0 -i <H265_FILE_NAME>.webm --stats
Expected Result: Video playback should be smooth without any corruption and dropped frames.
MP3 Playback Test to Verify that MP3 Playback is Successful Using nvgstplayer-1.0
Run the following command.
$ nvgstplayer-1.0 -i MP3_file.mp3 --stats
Expected Result: The MP3 playback should run correctly with no glitches or corruption.
MP3 Streaming (Stream the MP3 file from an HTTP Server)
Before you begin, ensure that the display is connected and the X server and the Ubuntu desktop are running.
Ensure that the host MP3 file is on the HTTP server for streaming.
Log in to the device and open a terminal window.
Navigate to the directory where the MP3 file is located and run the following command.
$ python3 -m http.server 8001 &
Download or copy the MP3 file to the device.
Run the following command.
$ nvgstplayer-1.0 -i http://<IP_ADDR_OF_DEVICE>:8001/MP3_file.mp3 --stats
Expected Result: The audio streaming should be without noise or breaks, and there should be no hangs or crashes while streaming.
Streaming Audio and Video File from the HTTP Server
Before you begin, ensure that the display is connected and the X server and the Ubuntu desktop are running.
Ensure that the audio and video file to be streamed is hosted on the HTTP server.
Log in to the device and open a terminal window.
Navigate to the directory where the media file is located and run the following command.
$ python3 -m http.server 8001 &
Download or copy the audio and video file to the device.
Run the following command.
$ nvgstplayer-1.0 -i http://<IP_ADDR_OF_DEVICE>:8001/<VP9_FILE_NAME>.webm --stats
Expected Result: Video streaming should be without distortion or issues, and there should be no hang or crash while streaming.
AUDIO+VIDEO RTSP Streaming: H264+AAC
This procedure streams a clip from an RTSP server.
Open the browser.
Stream the content from the RTSP server.
Open the link, for example, http://your_streaming_site.com/rtsp-server.html.
Click the AUDIO+VIDEO test file and stream.
Expected Result: Video streaming should have no distortion or issues, and there should be no hang or crash while streaming.
Camera Argus Samples
Compiling Argus SDK Samples and Running _cudahistogram
Run the following command.
cd /usr/src/jetson_multimedia_api/argus
Run the following command.
sudo apt-get install cmake build-essential pkg-config libx11-dev libgtk-3-dev libexpat1-dev libjpeg-dev libgstreamer1.0-dev
Run the following command.
sudo mkdir build
Run the following command.
cd build
Run the following command.
sudo cmake ..
Run the following command.
cd samples/cudaHistogram
Run the following command.
sudo make
Run the following command.
sudo make install
Expected Result: No failures should be observed during the compilation, and the sample binary should exist and run without issues.
Compiling Argus SDK Samples and Running _gstvideoencode
Run the following command.
cd /usr/src/jetson_multimedia_api/argus
Run the following command.
sudo apt-get install cmake build-essential pkg-config libx11-dev libgtk-3-dev libexpat1-dev libjpeg-dev libgstreamer1.0-dev
Run the following command.
sudo mkdir build
Run the following command.
cd build
Run the following command.
sudo cmake ..
Run the following command.
cd samples/gstVideoEncode
Run the following command.
sudo make
Run the following command.
sudo make install
Expected Result: No failures should be observed during compilation, and the sample binary should exist and run without issues.
Compiling Argus SDK Samples and Running _multisensor
Before you begin, ensure that two camera sensors are connected to the device.
Run the following command.
cd /usr/src/jetson_multimedia_api/argus
Run the following command.
sudo apt-get install cmake build-essential pkg-config libx11-dev libgtk-3-dev libexpat1-dev libjpeg-dev libgstreamer1.0-dev
Run the following command.
sudo mkdir build
Run the following command.
cd build
Run the following command.
sudo cmake ..
Run the following command.
cd samples/multiSensor
Run the following command.
sudo make
Run the following command.
sudo make install
Expected Result: No failures should be observed during compilation, and the sample binary should exist and run without issues.
Web Camera capture using GStreamer
USB cameras, Bayer sensors, and YUV sensors output YUV images without ISP processing and do not use the NVIDIA camera software stack. As a result, the OSS GStreamer v4l2src plugin is used for streaming.
Before you begin:
Ensure that gst-launch is available on the device with all dependencies installed.
Identify the /dev/videoX interface for the USB web cam.
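Because the /dev/videoX index varies between devices, a minimal enumeration sketch can help locate the web cam node; `v4l2-ctl --list-devices` (from the v4l-utils package, if installed) prints richer detail.

```shell
# List candidate V4L2 capture nodes; the loop prints nothing if no camera is attached.
for dev in /dev/video*; do
  if [ -e "$dev" ]; then
    echo "found capture node: $dev"
  fi
done
echo "scan complete"
```

Use whichever node corresponds to the USB web cam in the `device=` property of v4l2src.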
Capturing Video from a USB Web Camera and Recording the Video
Capture the video from the USB web cam in the MP4 format.
$ gst-launch-1.0 v4l2src device=/dev/video0 num-buffers=300 ! 'video/x-raw, width=640, height=480, format=(string)YUY2, framerate=(fraction)30/1' ! nvvidconv ! 'video/x-raw(memory:NVMM), format=(string)NV12' ! nvv4l2h264enc bitrate=8000000 ! h264parse ! qtmux ! filesink location=${HOME}/test.mp4 -e 2> /dev/null
Decode and render the captured video.
$ gst-launch-1.0 filesrc location=${HOME}/test.mp4 ! qtdemux ! h264parse ! nvv4l2decoder ! nvegltransform ! nveglglessink sync=0 2> /dev/null
Capture a video from the USB web cam in the MJPEG (AVI) format.
$ gst-launch-1.0 v4l2src device=/dev/video0 num-buffers=300 ! 'video/x-raw, width=640, height=480, format=(string)YUY2, framerate=(fraction)30/1' ! nvvidconv ! 'video/x-raw(memory:NVMM), format=(string)I420' ! nvjpegenc ! avimux ! filesink location=${HOME}/mjpeg.avi -e 2> /dev/null
Decode and render the captured video.
$ gst-launch-1.0 filesrc location=${HOME}/mjpeg.avi ! avidemux ! nvv4l2decoder mjpeg=true ! nvegltransform ! nveglglessink sync=0 2> /dev/null
Expected Result: Video Capture and Video Encode should be successful using the USB web cam.
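A minimal post-capture sanity check can be scripted as below. The stand-in file keeps the sketch self-contained; on the device, point `out` at the ${HOME}/test.mp4 produced by the capture step.

```shell
# Check that the capture step produced a non-empty output file.
out=$(mktemp)                  # stand-in for ${HOME}/test.mp4 from the capture step
printf 'x' > "$out"            # placeholder content so this sketch is self-contained
if [ -s "$out" ]; then
  echo "capture file OK"
else
  echo "capture file empty or missing"
fi
```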
Capturing and Displaying the Video from a USB Web Camera
Capture the video from the USB web cam and display it.
gst-launch-1.0 v4l2src device=/dev/video0 num-buffers=300 ! 'video/x-raw, width=640, height=480, format=(string)YUY2, framerate=(fraction)30/1' ! nvvidconv ! 'video/x-raw(memory:NVMM), format=(string)NV12' ! nvegltransform ! nveglglessink sync=0 2> /dev/null
Expected Result: The captured video should display successfully.
Capturing Video from a USB Web Cam and Running It Through TRT
Capture a video from the USB web cam and run it through TRT.
gst-launch-1.0 v4l2src device=/dev/video0 num-buffers=300 ! 'video/x-raw, width=640, height=480, format=(string)YUY2, framerate=(fraction)30/1' ! nvvidconv ! 'video/x-raw(memory:NVMM), format=(string)NV12' ! m.sink_0 nvstreammux width=640 height=480 name=m batch-size=1 ! nvinfer config-file-path=/opt/nvidia/deepstream/deepstream/samples/configs/deepstream-app/config_infer_primary.txt batch_size=1 ! nvvidconv ! 'video/x-raw(memory:NVMM), format=RGBA' ! nvdsosd process-mode=1 ! nvegltransform ! nveglglessink sync=0 2> /dev/null
Expected Result: The captured video should successfully run through TRT.
NVIDIA Containers
Install Container Engine and NVIDIA Container Toolkit
Install a supported container engine (Docker, Containerd, CRI-O, Podman) for your Linux distribution.
Install the NVIDIA Container Toolkit: Refer to the instructions here.
Configure the container engine: Refer to the instructions here.
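Since the following steps give both Docker and Podman variants, a small sketch can pick whichever engine is installed; the fallback message here is this sketch's own, not output from either tool.

```shell
# Select an available container engine for the pull/run steps that follow.
if command -v docker >/dev/null 2>&1; then
  engine="sudo docker"
elif command -v podman >/dev/null 2>&1; then
  engine="podman"
else
  engine=""
fi
echo "${engine:-no supported container engine found}"
```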
Run JetPack Container
Pull the JetPack Container.
# For docker
sudo docker pull nvcr.io/nvidia/l4t-jetpack:r36.3.0
# For podman
podman pull nvcr.io/nvidia/l4t-jetpack:r36.3.0
Run the JetPack Container.
# For docker
sudo docker run --rm -it \
  -e DISPLAY --net=host \
  --runtime nvidia \
  -v /tmp/.X11-unix/:/tmp/.X11-unix \
  -v ${HOME}/cuda-samples:/root/cuda-samples \
  nvcr.io/nvidia/l4t-jetpack:r36.3.0 /bin/bash
# For podman
podman run --rm -it \
  -e DISPLAY --net=host \
  --device nvidia.com/gpu=all \
  --group-add keep-groups \
  --security-opt label=disable \
  -v ${HOME}/cuda-samples:/root/cuda-samples \
  nvcr.io/nvidia/l4t-jetpack:r36.3.0 /bin/bash
CUDA Samples
Before you begin: Get the CUDA samples and set up DISPLAY.
Install git on the device.
Get the CUDA 12 samples:
cd ${HOME}
git clone -b v12.2 https://github.com/NVIDIA/cuda-samples.git
Log in to the display and check the display TTY using the w command.
11:54:08 up 3 min,  2 users,  load average: 0.33, 0.14, 0.05
USER     TTY      FROM     LOGIN@   IDLE   JCPU   PCPU WHAT
ubuntu   ttyS0    -        11:50    0.00s  0.09s  0.04s w
ubuntu   :1       :1       11:53   ?xdm?  39.26s  0.04s /usr/lib/gdm3/g
Set up the DISPLAY environment variable based on the output in step 1.
export DISPLAY=:1
xhost +local:
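The DISPLAY value can also be derived from the w output mechanically. The sketch below parses a stand-in copy of the output shown above; on the device, you would pipe `w -h` into the same awk expression instead.

```shell
# Find the TTY entry that looks like an X display (e.g. ":1") in `w` output.
w_output='ubuntu   ttyS0    -    11:50   0.00s  0.09s  0.04s w
ubuntu   :1       :1   11:53  ?xdm?  39.26s  0.04s /usr/lib/gdm3/g'
display=$(printf '%s\n' "$w_output" | awk '$2 ~ /^:/ {print $2; exit}')
echo "export DISPLAY=$display"
```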
Build and Run CUDA Samples Natively on Target Device
Before you begin: CUDA 12.2 should be installed on the device with all dependencies.
Build CUDA samples.
sudo apt-get install libglfw3 libglfw3-dev
cd ${HOME}/cuda-samples
make clean
make -j$(nproc)
Expected Result: No error/failure should be observed during the compilation, and an executable binary file should appear after the compilation is complete. Refer to https://docs.nvidia.com/cuda/cuda-samples/index.html#getting-started-with-cuda-samples for more information.
Go to the section Run CUDA Samples and run the given commands on the target.
Build and Run CUDA Samples in Container on Target Device
Run the JetPack container using the instructions in the section Run JetPack Container.
Build the CUDA samples within it.
apt update && apt install -y libglfw3 libglfw3-dev libdrm-dev pkg-config cmake
cd ${HOME}/cuda-samples
make clean
make -j$(nproc)
Expected Result: No error/failure should be observed during the compilation, and an executable binary file should appear after the compilation is complete. Refer to https://docs.nvidia.com/cuda/cuda-samples/index.html#getting-started-with-cuda-samples for more information.
Go to the section Run CUDA Samples and run the given commands within the container.
Run CUDA Samples
The following instructions to run CUDA samples can be executed natively on the target or within the JetPack container.
Run the bandwidthTest sample application.
cd ${HOME}/cuda-samples/bin/aarch64/linux/release
./bandwidthTest
Expected Result: No error/failure should be observed, and the sample application should run successfully.
Run the deviceQuery test sample application.
cd ${HOME}/cuda-samples/bin/aarch64/linux/release
./deviceQuery
Expected Result: No error/failure should be observed, and the sample application should run successfully.
Run the simpleGL test sample application.
cd ${HOME}/cuda-samples/bin/aarch64/linux/release
./simpleGL
Expected Result: No error/failure should be observed, and the sample application should run successfully.
Run the boxFilter test sample application.
cd ${HOME}/cuda-samples/bin/aarch64/linux/release
./boxFilter
Expected Result: No error/failure should be observed, and the sample application should run successfully.
Run the nbody test sample application.
cd ${HOME}/cuda-samples/bin/aarch64/linux/release
./nbody
Expected Result: No error/failure should be observed, and the sample application should run successfully.
Run the smokeParticles test sample application.
cd ${HOME}/cuda-samples/bin/aarch64/linux/release
./smokeParticles
Expected Result: No error/failure should be observed, and the sample application should run successfully.
Run the particles test sample application.
cd ${HOME}/cuda-samples/bin/aarch64/linux/release
./particles
Expected Result: No error/failure should be observed, and the sample application should run successfully.
Run the FDTD3d test sample application.
cd ${HOME}/cuda-samples/bin/aarch64/linux/release
./FDTD3d
Expected Result: No error/failure should be observed, and the sample application should run successfully.
Run the simpleCUBLAS test sample application.
cd ${HOME}/cuda-samples/bin/aarch64/linux/release
./simpleCUBLAS
Expected Result: No error/failure should be observed, and the sample application should run successfully.
Run the batchCUBLAS test sample application.
cd ${HOME}/cuda-samples/bin/aarch64/linux/release
./batchCUBLAS
Expected Result: No error/failure should be observed, and the sample application should run successfully.
Run the simpleCUFFT test sample application.
cd ${HOME}/cuda-samples/bin/aarch64/linux/release
./simpleCUFFT
Expected Result: No error/failure should be observed, and the sample application should run successfully.
Run the MersenneTwisterGP11213 test sample application.
cd ${HOME}/cuda-samples/bin/aarch64/linux/release
./MersenneTwisterGP11213
Expected Result: No error/failure should be observed, and the sample application should run successfully.
Run cuDNN Samples
The following instructions to build and run the cuDNN samples can be executed natively on the target or within the JetPack container.
Build and run the conv_sample sample.
cd /usr/src/cudnn_samples_v8/conv_sample
sudo make -j8
sudo chmod +x run_conv_sample.sh
sudo ./run_conv_sample.sh
Build and run the mnistCUDNN sample.
cd /usr/src/cudnn_samples_v8/mnistCUDNN
sudo make -j8
sudo chmod +x mnistCUDNN
sudo ./mnistCUDNN
Expected Result: No error/failure should be observed on compilation, and the executable binary should appear after the compilation and run without issues. Refer to https://docs.nvidia.com/deeplearning/cudnn/install-guide/index.html#verify for more information.
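Per the cuDNN verification documentation, mnistCUDNN prints "Test passed!" on success, so the pass/fail check can be scripted. The log below is a stand-in for the real sample output so the sketch is self-contained.

```shell
# Grep the sample's output for its success marker.
log=$(mktemp)
printf 'Result of classification: 1 3 5\nTest passed!\n' > "$log"   # stand-in for ./mnistCUDNN output
if grep -q 'Test passed!' "$log"; then
  echo "mnistCUDNN OK"
else
  echo "mnistCUDNN FAILED"
fi
```

On the device, replace the stand-in with `sudo ./mnistCUDNN | tee "$log"`.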
TensorRT Samples
The following instructions to build and run the TensorRT samples can be executed natively on the target or within the JetPack container.
Build TensorRT samples.
mkdir ${HOME}/tensorrt-samples
ln -s /opt/nvidia/tensorrt/data ${HOME}/tensorrt-samples/data
cp -a /opt/nvidia/tensorrt/samples ${HOME}/tensorrt-samples/
cd ${HOME}/tensorrt-samples/samples
make clean
make
Expected Result: No error/failure should be observed on compilation, and the executable binary should appear after the compilation. Refer to https://docs.nvidia.com/deeplearning/tensorrt/sample-support-guide/index.html for more information.
Run the TRT Sample (sample_algorithm_selector)
Run the following command.
cd ${HOME}/tensorrt-samples/bin
./sample_algorithm_selector
Expected Result: No error/failure should be observed on compilation, and the executable binary should appear after compilation and run without issues. Refer to https://docs.nvidia.com/deeplearning/tensorrt/sample-support-guide/index.html for more information.
Run the TRT Sample (sample_onnx_mnist)
Run the following command.
cd ${HOME}/tensorrt-samples/bin
./sample_onnx_mnist
Expected Result: No error/failure should be observed on compilation, and the executable binary should appear after compilation and run without issues. Refer to https://docs.nvidia.com/deeplearning/tensorrt/sample-support-guide/index.html for more information.
Run the TRT Sample (sample_onnx_mnist --useDLACore=0)
Run the following command.
cd ${HOME}/tensorrt-samples/bin
./sample_onnx_mnist --useDLACore=0
Expected Result: No error/failure should be observed on compilation, and the executable binary should appear after compilation and run without issues.
Run the TRT Sample (sample_onnx_mnist --useDLACore=1)
Run the following command.
cd ${HOME}/tensorrt-samples/bin
./sample_onnx_mnist --useDLACore=1
Expected Result: No error/failure should be observed on compilation, and the executable binary should appear after compilation and run without issues.
TRT + MM
Test video decode and TensorRT object detection with output rendered to the display.
Run one of the following commands.
gst-launch-1.0 filesrc location=/opt/nvidia/deepstream/deepstream/samples/streams/sample_720p.h264 ! h264parse ! nvv4l2decoder ! m.sink_0 nvstreammux width=1280 height=720 name=m batch-size=1 ! nvinfer config-file-path=/opt/nvidia/deepstream/deepstream/samples/configs/deepstream-app/config_infer_primary.txt batch_size=1 ! nvvidconv ! 'video/x-raw(memory:NVMM), format=RGBA' ! nvdsosd process-mode=1 ! nv3dsink sync=0 2> /dev/null
gst-launch-1.0 filesrc location=/opt/nvidia/deepstream/deepstream/samples/streams/sample_720p.h264 ! h264parse ! nvv4l2decoder ! m.sink_0 nvstreammux width=1280 height=720 name=m batch-size=1 ! nvinfer config-file-path=/opt/nvidia/deepstream/deepstream/samples/configs/deepstream-app/config_infer_primary.txt batch_size=1 ! nvvidconv ! 'video/x-raw(memory:NVMM), format=RGBA' ! nvdsosd process-mode=1 ! nvegltransform ! nveglglessink sync=0 2> /dev/null
Expected Result: Test video decoding and TensorRT object detection with output rendered to the display should be successful without any corruption or noise.
MM Samples
Before you begin, ensure that MM API samples are available on the device.
Check the Compilation and Run the video_convert App
Run the following command.
$ cd /usr/src/jetson_multimedia_api/samples/07_video_convert
$ sudo make
Run the following command.
$ sudo ./video_convert <in-file> <in-width> <in-height> <in-format> <out-file-prefix> <out-width> <out-height> <out-format> [OPTIONS]
For example,
sudo ./video_convert ../../data/Picture/nvidia-logo.yuv 1920 1080 YUV420 test.yuv 1920 1080 YUYV
Note
The video_convert sample consumes a YUV file. If you do not have a YUV file, use the jpeg_decode sample to generate one. For example, run the following command:
$ cd jetson_multimedia_api/samples/06_jpeg_decode/
$ sudo ./jpeg_decode num_files 1 ../../data/Picture/nvidia-logo.jpg ../../data/Picture/nvidia-logo.yuv
Expected Result: No error/failure should be observed on compilation, and the executable binary should appear after compilation and run without issues. Refer to https://docs.nvidia.com/jetson/archives/r36.4/ApiReference/l4t_mm_07_video_convert.html for more information.
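When preparing raw input files for video_convert, the frame-size arithmetic is worth checking: a YUV420 frame occupies width × height × 3/2 bytes (one full-resolution luma plane plus two quarter-resolution chroma planes), so a valid input file's size should be a multiple of that. A sketch:

```shell
# One 1920x1080 YUV420 frame = luma (W*H) + two chroma planes (W*H/4 each).
W=1920; H=1080
frame_bytes=$(( W * H * 3 / 2 ))
echo "one ${W}x${H} YUV420 frame = ${frame_bytes} bytes"
# → one 1920x1080 YUV420 frame = 3110400 bytes
```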
Check the Compilation and Run the Backend App
Run the following command.
$ cd /usr/src/jetson_multimedia_api/samples/backend
$ sudo make
Run the following command.
$ sudo ./backend 1 ../../data/Video/sample_outdoor_car_1080p_10fps.h264 H264 \
  --trt-deployfile ../../data/Model/GoogleNet_one_class/GoogleNet_modified_oneClass_halfHD.prototxt \
  --trt-modelfile ../../data/Model/GoogleNet_one_class/GoogleNet_modified_oneClass_halfHD.caffemodel \
  --trt-mode 0 --trt-proc-interval 1 -fps 10
Expected Result: No error/failure should be observed on compilation, and the executable binary should appear after compilation and run without issues. Refer to https://docs.nvidia.com/jetson/archives/r36.4/ApiReference/l4t_mm_backend.html for more information.
Check the Compilation and Run the video_encode App
Run the following command.
$ cd /usr/src/jetson_multimedia_api/samples/01_video_encode
$ sudo make
Run the following command.
$ sudo ./video_encode <in-file> <in-width> <in-height> <encoder-type> <out-file> [OPTIONS]
For example,
sudo ./video_encode ../../data/Video/sample_outdoor_car_1080p_10fps.yuv 1920 1080 H264 sample_outdoor_car_1080p_10fps.h264
Expected Result: No error/failure should be observed on compilation, and the executable binary should appear after compilation and run without issues. Refer to https://docs.nvidia.com/jetson/archives/r36.4/ApiReference/l4t_mm_01_video_encode.html for more information.
Check the Compilation and Run the video_decode App
Run the following command.
$ cd /usr/src/jetson_multimedia_api/samples/00_video_decode
$ sudo make
Run the following command.
$ sudo ./video_decode <in-format> [options] <in-file>
For example,
./video_decode H264 ../../data/Video/sample_outdoor_car_1080p_10fps.h264
Expected Result: No error/failure should be observed on compilation, and the executable binary should appear after compilation and run without issues. Refer to https://docs.nvidia.com/jetson/archives/r36.4/ApiReference/l4t_mm_00_video_decode.html for more information.
Complete Pipeline: Inferencing
[Jetson] Classifying Images with ImageNet (googlenet,caffe)
Flash the device with the test image.
Install the JetPack components.
Build the project on the device from the source (https://github.com/dusty-nv/jetson-inference/blob/master/docs/building-repo-2.md).
The repository for TensorRT-accelerated deep learning networks for image recognition, object detection with localization (for example, bounding boxes), and semantic segmentation will be downloaded. Various pre-trained DNN models are automatically downloaded.
$ sudo apt-get update
$ sudo apt-get install git cmake libpython3-dev python3-numpy
$ git clone --recursive https://github.com/dusty-nv/jetson-inference
$ cd jetson-inference
$ mkdir build
$ cd build
$ cmake ../
$ make
$ sudo make install
$ sudo ldconfig
Refer to https://github.com/dusty-nv/jetson-inference/blob/master/docs/jetpack-setup-2.md for more information.
$ cd jetson-inference/build/aarch64/bin
$ sudo python3.6 ./imagenet-console.py --network=googlenet images/orange_0.jpg output_0.jpg   # --network flag is optional, default is googlenet
Note
The first time you run each model, TensorRT will take a few minutes to optimize the network. The optimized network file is cached to disk, so future runs using the model will load faster.
Expected Result: The installation should complete without any issues, and inferencing should give the expected output. For example, the image is recognized as orange (class #950) with 97.900391% confidence. Refer to https://github.com/dusty-nv/jetson-inference for more information.
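The note above explains that TensorRT caches the optimized engine to disk after the first run. A quick existence check tells you whether the slow first-run optimization is still ahead; the cache path below is hypothetical, used only to make the sketch self-contained.

```shell
# If the cached engine file exists, subsequent runs skip the slow optimization.
cache="$(mktemp -d)/googlenet.engine"   # hypothetical cache path for this sketch
if [ -f "$cache" ]; then
  echo "cached engine found: model will load quickly"
else
  echo "no cached engine: first run will spend a few minutes optimizing"
fi
```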
[Jetson] Running the Live Camera Recognition Demo with ImageNet (googlenet,caffe)
Before you begin, ensure that the Ubuntu Desktop with the graphical desktop packages is installed.
Flash device with the test image.
Connect the camera to the device.
Install the JetPack components.
Build the project on device from the source (refer to https://github.com/dusty-nv/jetson-inference/blob/master/docs/building-repo-2.md for more information).
The repository for TensorRT-accelerated deep learning networks for image recognition, object detection with localization (for example, bounding boxes), and semantic segmentation will be downloaded. Various pre-trained DNN models are automatically downloaded.
$ cd $HOME
$ sudo apt-get update
$ sudo apt-get install git cmake libpython3-dev python3-numpy
$ git clone --recursive https://github.com/dusty-nv/jetson-inference
$ cd jetson-inference
$ mkdir build
$ cd build
$ cmake ../
$ make
$ sudo make install
$ sudo ldconfig
Refer to https://github.com/dusty-nv/jetson-inference/blob/master/docs/jetpack-setup-2.md for more information.
Navigate to $HOME/jetson-inference/build/aarch64/bin.
$ sudo python3.6 ./imagenet-camera --network=resnet-18   # using ResNet-18, default MIPI CSI camera (1280x720)
Run the test for 5 minutes.
Interrupt the test.
Expected Result: The installation should complete without any issues, and inferencing should give the expected output. In this case, it is:
class 0400 - 0.021591 (academic gown, academic robe, judge's robe)
class 0413 - 0.025543 (assault rifle, assault gun)
class 0526 - 0.023438 (desk)
class 0534 - 0.011513 (dishwasher, dish washer, dishwashing machine)
class 0592 - 0.027084 (hard disc, hard disk, fixed disk)
class 0667 - 0.238525 (mortarboard)
DeepStream Test Apps
Run the DeepStream Test Apps
To achieve the best performance, set the max clock settings.
sudo jetson_clocks
cd /opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream-image-decode-test
deepstream-image-decode-app /opt/nvidia/deepstream/deepstream/samples/streams/sample_720p_mjpeg.mp4
cd /opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream-test1
deepstream-test1-app ./dstest1_config.yml 2> /dev/null
cd /opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream-test2
deepstream-test2-app ./dstest2_config.yml 2> /dev/null
cd /opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream-test3
deepstream-test3-app ./dstest3_config.yml 2> /dev/null
Note
Messages such as WARNING: Deserialize engine failed because file path: <engine-name> open error are expected for engines that are not present.
Test the Secondary GStreamer Inference Engine (SGIE)
To avoid dropping frames during playback, run the following command.
sudo sed -i 's/sync=1/sync=0/' /opt/nvidia/deepstream/deepstream/samples/configs/deepstream-app/source4_1080p_dec_infer-resnet_tracker_sgie_tiled_display_int8.txt
To achieve the best performance, set the max clock settings.
sudo jetson_clocks
deepstream-app -c /opt/nvidia/deepstream/deepstream/samples/configs/deepstream-app/source4_1080p_dec_infer-resnet_tracker_sgie_tiled_display_int8.txt 2> /dev/null
Note
Messages such as WARNING: Deserialize engine failed because file path: <engine-name> open error are expected for engines that are not present.
Test 30 Streams Video Decode and TensorRT Object Detection with Output Rendered to the Display
To avoid dropping frames during playback, run the following command.
sudo sed -i 's/sync=1/sync=0/' /opt/nvidia/deepstream/deepstream/samples/configs/deepstream-app/source30_1080p_dec_infer-resnet_tiled_display_int8.txt
To achieve the best performance, set the max clock settings by running the following command.
sudo jetson_clocks
deepstream-app -c /opt/nvidia/deepstream/deepstream/samples/configs/deepstream-app/source30_1080p_dec_infer-resnet_tiled_display_int8.txt 2> /dev/null
The perf rate display should be approximately 21fps.
**PERF: 21.39 (21.16) 21.39 (21.09) 21.39 (21.09) 21.39 (21.09) 21.39 (21.09) 21.39 (21.05) 21.39 (21.09) 21.39 (21.09) 21.39 (21.16) 21.39 (21.09) 21.39 (21.05) 21.39 (21.10) 21.39 (21.16) 21.39 (21.09) 21.39 (21.09) 21.39 (21.09) 21.39 (21.09) 21.39 (21.16) 21.39 (21.16) 21.39 (21.16) 21.39 (21.16) 21.39 (21.09) 21.39 (21.09) 21.39 (21.09) 21.39 (21.16) 21.39 (21.09) 21.39 (21.16) 21.39 (21.16) 21.39 (21.09) 21.39 (21.09)
Clicking the left mouse button on a video stream zooms into that stream and clicking the right mouse button zooms back out again. The frame-rate should be greater than 20fps (as shown below) for Jetson AGX Xavier and around 13-14fps for Jetson Xavier NX. Messages such as WARNING: Deserialize engine failed because file path: <engine-name> open error are expected for engines that are not present.
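The **PERF figures can be checked against the ~20 fps target (for Jetson AGX Xavier) mechanically. The line below is a stand-in copied from the sample output above; on the device, pipe the application's log through the same filter.

```shell
# Extract the parenthesized average-fps figures and flag any below 20 fps.
perf='**PERF: 21.39 (21.16)  21.39 (21.09)  21.39 (21.09)  21.39 (21.16)'
printf '%s\n' "$perf" | grep -oE '\([0-9.]+\)' | tr -d '()' | \
  awk '{ if ($1 < 20) bad = 1 } END { print (bad ? "some streams below 20 fps" : "all streams >= 20 fps") }'
```

For Jetson Xavier NX, lower the threshold to about 13 fps as noted above.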
Triton
Run the Sample DeepStream Triton Application
Install Triton.
cd /opt/nvidia/deepstream/deepstream/samples
sudo ./prepare_ds_triton_model_repo.sh
sudo apt -y install ffmpeg
sudo ./prepare_classification_test_video.sh
sudo ./triton_backend_setup.sh
Remove the GStreamer cache and verify that the nvinferserver plugin is present.
rm ~/.cache/gstreamer-1.0/registry.aarch64.bin
gst-inspect-1.0 nvinferserver
Run the sample DeepStream Triton application.
export DISPLAY=<local-display>
cd /opt/nvidia/deepstream/deepstream/samples/configs/deepstream-app-triton
deepstream-app -c source4_1080p_dec_infer-resnet_tracker_sgie_tiled_display_int8.txt 2> /dev/null
Expected Result: The DeepStream Triton application should run successfully.
Jetson AI Benchmarks
Run the Jetson AI Benchmarks
Run the following commands.
cd ${HOME}
git clone https://github.com/NVIDIA-AI-IOT/jetson_benchmarks.git
cd jetson_benchmarks
mkdir models
sudo sh install_requirements.sh
Ensure that the fan service is running.
sudo systemctl restart nvfancontrol.service
Download the models.
python3 utils/download_models.py --all --csv_file_path ./benchmark_csv/orin-benchmarks.csv --save_dir ${HOME}/jetson_benchmarks/models
Set Orin to the maximum power mode and reboot the device when prompted.
sudo nvpmodel -m 0
cd ${HOME}/jetson_benchmarks
sudo python3 benchmark.py --all --csv_file_path ./benchmark_csv/orin-benchmarks.csv --model_dir ${HOME}/jetson_benchmarks/models --jetson_clocks
Expected Result: The measured performance should match the reference TensorRT performance numbers for Jetson AGX Orin published by NVIDIA.
VPI
Before you begin:
VPI should be installed on the device, and the sample applications should be available on the device rootfs.
ubuntu@tegra-ubuntu:~$ dpkg -l | grep -i vpi
ii  libnvvpi2     2.4.1  arm64  NVIDIA Vision Programming Interface library
ii  vpi2-demos    2.4.1  arm64  NVIDIA VPI GUI demo applications
ii  vpi2-dev      2.4.1  arm64  NVIDIA VPI C/C++ development library and headers
ii  vpi2-samples  2.4.1  arm64  NVIDIA VPI command-line sample applications
Provide sudo/root permissions to the user to compile the sample applications, or log in as root.
Run the 2D Image Convolution sample.
cd /opt/nvidia/vpi3/samples/01-convolve_2d
Build the C++ sample.
sudo cmake .
sudo make
Run the C++ sample.
sudo ./vpi_sample_01_convolve_2d cpu ../assets/kodim08.png
View the output (refer to https://docs.nvidia.com/vpi/sample_conv2d.html).
eog edges_cpu.png
Run the Python sample.
sudo python3 main.py cpu ../assets/kodim08.png
View the output (refer to https://docs.nvidia.com/vpi/sample_conv2d.html).
eog edges_python3_cpu.png
Expected Result: The 2D Image Convolution sample should run successfully. Refer to https://docs.nvidia.com/vpi/sample_conv2d.html for more information.
Running the Stereo Disparity Sample
Run the following command.
cd /opt/nvidia/vpi3/samples/02-stereo_disparity
Build the C++ sample.
sudo cmake .
sudo make
Run the C++ sample.
sudo ./vpi_sample_02_stereo_disparity cuda ../assets/chair_stereo_left.png ../assets/chair_stereo_right.png
View the outputs (refer to https://docs.nvidia.com/vpi/sample_stereo.html).
eog confidence_cuda.png
eog disparity_cuda.png
Run the Python sample.
sudo python3 main.py cuda ../assets/chair_stereo_left.png ../assets/chair_stereo_right.png
View the output (refer to https://docs.nvidia.com/vpi/sample_stereo.html).
eog confidence_python3_cuda.png
eog disparity_python3_cuda.png
Expected Result: The Stereo Disparity sample should run successfully. Refer to https://docs.nvidia.com/vpi/sample_stereo.html for more information.
Run the Harris Corners Detector Sample that Uses the PVA
Run the following command.
cd /opt/nvidia/vpi3/samples/03-harris_corners
Build the C++ sample.
sudo cmake .
sudo make
Run the C++ sample.
sudo ./vpi_sample_03_harris_corners pva ../assets/kodim08.png
View the output (refer to https://docs.nvidia.com/vpi/sample_harris_detector.html).
eog harris_corners_pva.png
Run the Python sample.
sudo python3 main.py pva ../assets/kodim08.png
View the output (refer to https://docs.nvidia.com/vpi/sample_harris_detector.html).
eog harris_corners_python3_pva.png
Expected Result: The Harris Corners Detector sample that is using the PVA should run successfully.
Run the KLT Bounding Box Tracker
Run the following command.
cd /opt/nvidia/vpi3/samples/06-klt_tracker
Build the C++ sample.
sudo cmake .
sudo make
Run the C++ sample.
sudo ./vpi_sample_06_klt_tracker cuda ../assets/dashcam.mp4 ../assets/dashcam_bboxes.txt
Play the output video (refer to https://docs.nvidia.com/vpi/sample_klt_tracker.html).
Run the Python sample.
sudo python3 main.py cuda ../assets/dashcam.mp4 ../assets/dashcam_bboxes.txt
Play the output video (refer to https://docs.nvidia.com/vpi/sample_klt_tracker.html).
Expected Result: The KLT Bounding Box Tracker should run successfully. Refer to https://docs.nvidia.com/vpi/sample_klt_tracker.html for more information.
Run the Temporal Noise Reduction
Run the following command.
cd /opt/nvidia/vpi3/samples/09-tnr
Build the C++ sample:
sudo cmake .
sudo make
Run the C++ sample.
sudo ./vpi_sample_09_tnr cuda ../assets/noisy.mp4
Play the output video (refer to https://docs.nvidia.com/vpi/sample_tnr.html).
Run the Python sample.
sudo python3 main.py cuda ../assets/noisy.mp4
Play the output video (refer to https://docs.nvidia.com/vpi/sample_tnr.html).
Expected Result: Running the Temporal Noise Reduction should be successful. Refer to https://docs.nvidia.com/vpi/sample_tnr.html for more information.
Run the Perspective Warp
Run the following command.
cd /opt/nvidia/vpi3/samples/10-perspwarp
Build the C++ sample.
sudo cmake .
sudo make
Run the C++ sample.
sudo ./vpi_sample_10_perspwarp cuda ../assets/noisy.mp4
Play the output video (refer to https://docs.nvidia.com/vpi/sample_perspwarp.html).
Run the Python sample.
sudo python3 main.py cuda ../assets/noisy.mp4
Play the output video (refer to https://docs.nvidia.com/vpi/sample_perspwarp.html).
Expected Result: The Perspective Warp should be successful. Refer to https://docs.nvidia.com/vpi/sample_perspwarp.html for more information.
Run the Background Subtractor
Run the following command.
cd /opt/nvidia/vpi3/samples/14-background_subtractor
Build the C++ sample.
sudo cmake .
sudo make
Run the C++ sample.
sudo ./vpi_sample_14_background_subtractor cpu ../assets/pedestrians.mp4
Play the output video (refer to https://docs.nvidia.com/vpi/sample_background_subtractor.html).
Run the Python sample.
sudo python3 main.py cpu ../assets/pedestrians.mp4
Play the output video (refer to https://docs.nvidia.com/vpi/sample_background_subtractor.html).
Expected Result: Background Subtractor should be successful. Refer to https://docs.nvidia.com/vpi/sample_background_subtractor.html for more information.