Test Plan and Validation

Flash and Boot

This section provides information about the Flash and Boot test cases.

Check the Flash and Boot Using the flash.sh Script

To flash the target device:

  1. Place the target device into recovery mode.

    1. Power on the carrier board and hold the Recovery button.

    2. Press the RESET button.

Note

You can also run the topo command to place the device in recovery mode. The topo command is supported only on Jetson Orin.

If you did not place the target device in recovery mode manually, run the topo command.

  2. To download the customer release build from the server, run the following command:

  3. (Optional) Start the serial console to check boot logs.

  4. Flash the device.

Expected Result: The device should boot correctly without a crash or kernel panic. Refer to FlashingSupport for more information.

Check the Flash and Boot SDKM

Note

The following steps mention flashing of NVIDIA shared image on Jetson Platform. If you have a custom BSP, flash it first and then use SDKM only to install additional SDK packages.

  1. Go to https://developer.nvidia.com/nvidia-sdk-manager and download SDK Manager.

    • To install SDK Manager on your Ubuntu installation, run the following command.

    • To install SDK Manager using the .deb package on your system, run the following command.

  2. In a terminal window, to start SDK Manager, run the sdkmanager command.

  3. In the SDK Manager launch screen, select the appropriate login tab for your account type:

    • NVIDIA Developer (developer.nvidia.com)

    • NVONLINE (partners.nvidia.com)

  4. To connect to the serial console, run the sudo minicom -c on -w -D /dev/ttyACM0 command.

    When you launch sdkmanager, your connected device should be identified and listed, for example, Jetson AGX Orin.

  5. To place the target device into reset/recovery mode, complete the following tasks:

    1. Make sure the device is powered on; if it is not, press the power button.

    2. Press and hold the Recovery (RCM) button.

    3. Press the Reset button.

    4. Release the Recovery (RCM) button.

  6. Select the host machine, the target hardware (Jetson AGX Orin), and the latest available version of JetPack.

  7. Select the packages to install.

  8. Select all product categories (these are used to validate other feature use cases), and then continue.

  9. Select Manual Setup - Jetson AGX Orin, select EMMC as the storage device, and set the username/password for the flashed image.

  10. To debug a failure in the terminal tab in SDK Manager, complete the steps in the SDK Manager User Guide (https://docs.nvidia.com/sdk-manager/index.html).

  11. Select Install.

  12. To verify the flash, log in to the device.

  13. Verify that the installation completed successfully.

Expected Result: The device boot should complete without any crash or kernel panic, and all JetPack components should be installed properly on boot. Refer to https://docs.nvidia.com/sdk-manager/install-with-sdkm-jetson/index.html for more information.
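Whether the JetPack components landed on the device can be spot-checked from dpkg output. A minimal sketch that parses a sample dpkg line (package version hypothetical; on the device you would run dpkg -l and look for the nvidia-jetpack metapackage):

```shell
# Sample line as printed by `dpkg -l` (version string hypothetical).
dpkg_line='ii  nvidia-jetpack  5.1.1-b56  arm64  NVIDIA Jetpack Meta Package'

status=$(echo "$dpkg_line" | awk '{print $1}')   # "ii" means installed
pkg=$(echo "$dpkg_line" | awk '{print $2}')

if [ "$status" = "ii" ]; then
  echo "$pkg is installed"
else
  echo "$pkg is NOT fully installed (status: $status)"
fi
```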

NVME Boot

Before you begin, ensure that you have an NVMe drive with a minimum of 16 GB.

To flash the target device:

  1. To place the target device in recovery mode, complete the following tasks:

    1. Power on the carrier board and keep the Recovery button pressed.

    2. Press the Reset button.

Note

You can also use the topo command to place the device in recovery mode.

If you have not yet manually placed the target device in recovery mode, run the topo command.

  2. Download the customer release build from the server.

  3. (Optional) To check boot logs, start the serial console.

  4. Flash the device.

  5. Complete the setup, for example, by typing the username, the location, the timezone, and so on.

  6. Log in to the system.

NFS Boot

  1. Install the packages for NFS.

  2. Complete the following steps to flash the target device:

    1. Place the target device into recovery mode.

    2. Power on the carrier board and press and hold the Recovery button.

    3. Press the Reset button.

    Note

    You can also use the topo command to place device in recovery mode.

  3. Download the customer release build from the server.

    If you have not yet manually placed the target device in recovery mode, run the topo command.

  4. (Optional) To check boot logs, start the serial console.

  5. Add the rootfs path to the /etc/exports file.

  6. Restart the NFS service.

    Refer to RCM Boot to NFS for more information.

  7. To make sure that your NFS share is visible to the client, run the following command on the NFS server.

  8. Run the rcm boot command.

    For example:

    $ sudo ./flash.sh -N 10.24.212.249:$HOME/generic_release_aarch64/Linux_for_Tegra/Linux_for_Tegra/rootfs --rcm-boot jetson-agx-orin-devkit eth0

  9. Complete the set up.

  10. Log in to the system.

Expected Result: The device should boot properly without a crash or kernel panic. Refer to https://forums.developer.nvidia.com/t/how-to-boot-from-nfs-on-xavier-nx/142505 for more information.

System Software

Detection of USB Hub (FS/HS/SS)

  1. Boot the target.

  2. Connect the USB Hub (FS/HS/SS) to one USB port in the target device.

  3. Check the serial terminal.

    The serial terminal shows the model and speed of the hub. You can also check the dmesg logs for more information.

  4. Connect the hub to the second port and check.

Expected Result: The USB hub should be enumerated on all USB ports of the target.

Detection of USB-3.0 Flashdrive (Hot plug-in)

  1. Boot the target.

  2. Connect the USB3 pendrive to one USB port of the target.

  3. Using a serial terminal, check whether the flash drive has enumerated by running the lsusb command.

    You can also check whether the drive is listed in the file browser and check the dmesg logs for more information.

  4. Copy files to the pendrive.

  5. Complete steps 1-4 on all USB ports on the target.

Expected Result: The USB3 Pendrive should be detected on all of the USB ports of the target.

Note

Not all ports are USB 3.0 capable, so your device might operate at lower speeds. Review the relevant developer kit documentation to determine which ports are USB 3.0 capable.
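On the serial terminal, the negotiated link speed appears in lsusb -t output. A sketch that classifies a sample line (the device, port, and driver shown are hypothetical):

```shell
# Sample `lsusb -t` line (hypothetical device); the trailing field is the
# negotiated speed in Mbps: 12M = Full Speed, 480M = High Speed,
# 5000M = SuperSpeed (USB 3.0).
line='|__ Port 1: Dev 2, If 0, Class=Mass Storage, Driver=usb-storage, 5000M'

speed=$(echo "$line" | awk -F', ' '{print $NF}')
case "$speed" in
  5000M|10000M) echo "SuperSpeed (USB 3.0+) link" ;;
  480M)         echo "High Speed (USB 2.0) link" ;;
  12M)          echo "Full Speed link" ;;
  *)            echo "unrecognized speed: $speed" ;;
esac
```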

Detecting the Keyboard

  1. Boot the target.

  2. Connect the keyboard to the device.

  3. Using a serial terminal, check whether the keyboard has enumerated, for example, by running the lsusb command or checking the dmesg logs.

  4. Press any key on the keyboard to verify the functionality.

Expected Result: The keyboard should be successfully detected and functional.

Detecting the Mouse

  1. Boot the target.

  2. Connect the USB mouse to the device.

  3. Using a serial terminal, check whether the mouse has enumerated.

  4. Use the connected mouse to verify the functionality.

Expected Result: The mouse should be successfully detected and functional.

Cold Boot 10 Cycles

  1. Boot the target.

  2. Log in to the device.

  3. Run the power down command, for example, sudo poweroff.

  4. Press the power button.

Expected Result: The device should reboot successfully for each iteration without any kernel panic, failures, or errors.
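Per-iteration pass/fail can be automated by scanning the captured serial log for panics. A minimal sketch with a stand-in log file (the path and log contents below are hypothetical; on a real run this would be the serial capture from the boot cycle):

```shell
# Write a stand-in serial capture; on a real run this would be the
# minicom/serial log recorded during one cold-boot cycle.
log=$(mktemp)
printf '[    0.000000] Booting Linux on physical CPU 0x0\n' > "$log"
printf '[    5.123456] systemd[1]: Reached target Multi-User System.\n' >> "$log"

# Flag the iteration if a panic or oops appears anywhere in the log.
if grep -qiE 'kernel panic|oops' "$log"; then
  result=FAIL
else
  result=PASS
fi
echo "boot cycle: $result"
rm -f "$log"
```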

Warm Boot 10 Cycles

  1. Boot the target.

  2. Log in to the device.

  3. Run the sudo systemctl reboot command to warm boot the device.

Expected Result: The device should reboot successfully for each iteration without any kernel panic, failures, or errors.

Checking for Failures and Error Messages in dmesg

  1. Boot the target.

  2. Check the dmesg logs for failures and errors.

Expected Result: There should be no error or failure messages in the dmesg log.

Using the Display Port (DP)

  1. Boot the target with the Display Port (DP) connected (1080P/4K).

  2. Ensure that you see the boot logo, the framebuffer, and the desktop on the connected DP.

Expected Result: There should be no corruption on the display.

LP0-Resume Basic Functionality

Note

This check needs to be completed five times.

  1. Boot the target.

  2. Ensure that you can access the serial terminal, for example, ttyUSB0.

  3. On the serial terminal, run the sudo systemctl suspend command.

Expected Result: Suspend messages are displayed on the serial terminal, and the display turns off.

You can wake up the device by using the connected USB keyboard.

UART I/O (Debug Print Messages)

  1. Boot the target.

  2. Ensure that you can access the serial terminal, for example, ttyUSB0.

Configuring the 40-Pin Expansion Header

  1. Start Jetson-IO, and on the developer kit, run the following command:

  2. The options to configure the I/O are displayed.

  3. Complete the steps in https://docs.nvidia.com/jetson/l4t/index.html#page/Tegra%20Linux%20Driver%20Package%20Development%20Guide.

  4. Save and reboot.

  5. After configuring the pins, reboot the device.

  6. Start jetson-io.py and check whether the pins have been configured for the selected option.

    Refer to ConfiguringTheJetsonExpansionHeaders for more information.

Jetson-Linux WatchDog Timer Enable Verification

  1. Boot the target.

  2. On the serial console, to crash the system, run the following command.

  3. Check that the WatchDog Timer reboots the device after the timeout, for example, 180 seconds.

Expected Result: The device should be rebooted by the WatchDog Timer without errors or failures.

Verifying the Boot Time Services and Scripts

  1. Verify that all boot time scripts are correctly initialized.

  2. To check that there are no failures because of the scripts, run the following command.

  3. To verify that the services/scripts start without failure/errors, run the following command.

  4. Check serial logs during the device boot-up.

Expected Result: There should not be any errors.

nvpmodel: Setting and Verifying the Minimum and Maximum Power Modes

Note

Nvpmodel introduces different power modes on the Jetson platforms. The mode that is selected determines which CPU cores are used and the maximum frequency of the CPU and GPU.

  1. To check the current power mode, boot the device and run the sudo nvpmodel -q command.

  2. To set the minimum power mode, run the sudo nvpmodel -m x command.

    In this command, x is the minimum power mode.

  3. To set the maximum supported power, run the sudo nvpmodel -m 0 command.

  4. To check the maximum and minimum clocks for each power mode, run the sudo nvpmodel -q --verbose command.

  5. Here are the power modes:

    • sudo nvpmodel -m 0 — MAXN

    • sudo nvpmodel -m 1 — MODE_15W

    • sudo nvpmodel -m 2 — MODE_30W

    • sudo nvpmodel -m 3 — MODE_50W

Detailed specifications about the power modes are in the /etc/nvpmodel.conf file.

Expected Result: You should be able to set any supported power mode. The CPU/GPU frequency should change based on the selected power mode. Refer to Platform Power and Performance for more information.
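The mode table above can be wrapped in a small helper for scripted checks. A sketch using the Jetson AGX Orin mode names listed above (other platforms define different modes, so this mapping is an assumption tied to that table):

```shell
# Map an nvpmodel mode number to its name, per the Jetson AGX Orin
# table above. Unknown modes return a nonzero status.
nvpmodel_name() {
  case "$1" in
    0) echo "MAXN" ;;
    1) echo "MODE_15W" ;;
    2) echo "MODE_30W" ;;
    3) echo "MODE_50W" ;;
    *) echo "UNKNOWN"; return 1 ;;
  esac
}

nvpmodel_name 0
nvpmodel_name 3
```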

Running the tegrastats Command

  1. Run the tegrastats command.

Expected Result: The specification values for active CPUs and GPUs should be reflected in the output:

    07-11-2023 07:10:08 RAM 4601/63895MB (lfb 3x64MB) CPU [0%@729,0%@729,0%@729,0%@729,0%@729,0%@729,0%@729,0%@729,0%@729,0%@729,0%@729,0%@729] GR3D_FREQ 0% cv0@0C cpu@45.062C tboard@33.25C soc2@41.062C tdiode@38.625C soc0@41.656C cv1@0C gpu@0C tj@45.062C soc1@40.781C cv2@0C VDD_GPU_SOC 3065mW/3065mW VDD_CPU_CV 766mW/766mW VIN_SYS_5V0 3623mW/3623mW NC 0mW/0mW VDDQ_VDD2_1V8AO 503mW/503mW NC 0mW/0mW
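The sample line above can be parsed with standard tools when scripting this check. A minimal sketch that reuses the example values (abbreviated; not live tegrastats output):

```shell
# Parse a tegrastats line for RAM usage and the core-0 CPU frequency.
# The sample below reuses (an abbreviated form of) the example values above.
line='07-11-2023 07:10:08 RAM 4601/63895MB (lfb 3x64MB) CPU [0%@729,0%@729,0%@729,0%@729] GR3D_FREQ 0%'

# RAM used/total in MB.
ram=$(echo "$line" | sed -n 's/.*RAM \([0-9]*\/[0-9]*\)MB.*/\1/p')
echo "RAM (used/total MB): $ram"

# First entry inside the CPU [...] list; the value after '@' is MHz.
cpu0=$(echo "$line" | sed -n 's/.*CPU \[\([^]]*\)\].*/\1/p' | cut -d, -f1 | cut -d@ -f2)
echo "CPU0 frequency: ${cpu0} MHz"
```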

Graphics

Running a Graphics Application Using X11

  1. Connect the display to the target and boot the target.

  2. If GDM is enabled, log in to the device using the desktop interface. This step is also necessary to run X11 graphics binaries over SSH. If desired, automatic login can be enabled by editing /etc/gdm3/custom.conf and adding AutomaticLoginEnable=True and AutomaticLogin=<username> under the [daemon] section.

  3. Export the DISPLAY variable, for example export DISPLAY=:0.

  4. To check for various display modes that are supported by the display, run the following command:

    $ xrandr
    
  5. Run the glxgears sample application by executing the following command. Verify that the frames per second matches the display information in the previous step.

    $ glxgears -fullscreen
    
  6. Run the bubble graphics application by executing the following command.

    $ /usr/src/nvidia/graphics_demos/prebuilts/bin/x11/bubble
    

Expected Result: The graphics applications should render successfully on the display, and no corruption or hang should be observed while rendering. Refer to Graphics for more information.

Running a Graphics Application Using EGLdevice (DRM)

  1. Connect the display to the target and boot the target.

  2. Stop GDM and if X is running on the target, kill it.

    $ sudo service gdm stop (Stop GDM)
    $ sudo pkill x
    
  3. Load the NVIDIA drm module.

    • For Jetson AGX Orin:

      $ sudo modprobe nvidia_drm modeset=1
      
  4. Run the bubble graphics application by executing the following command.

Expected Result: The graphics application should render successfully on the display. There should be no corruption or hang while rendering.

Running a Graphics Application Using Wayland

  1. Connect the display to the target and boot it after flashing.

  2. If X is running on the target, kill it.

    $ sudo service gdm stop (Stop GDM)
    $ sudo pkill x

  3. Launch Wayland by running the following commands.

    $ unset DISPLAY
    $ export WESTON_TTY=1
    $ sudo XDG_RUNTIME_DIR=/tmp/xdg weston --tty=$WESTON_TTY --idle-time=0 &
    
  4. Press Enter.

  5. Run the bubble graphics application by executing the following command.

    $ sudo XDG_RUNTIME_DIR=/tmp/xdg /usr/src/nvidia/graphics_demos/prebuilts/bin/wayland/bubble
    

Expected Result: The graphics binary should render successfully on the display. No corruption or hang should be observed while rendering.

Refer to Graphics for more information.

Kernel

Checking the Kernel Version

  1. Boot the device.

  2. To determine the kernel version number, run the uname -r command.

Expected Result: The kernel version should be displayed, for example, 5.16.0-tegra-g44acfbed970e.
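The version check can be scripted; a minimal sketch (the -tegra suffix is specific to Jetson Linux builds, so on other systems only the numeric part applies):

```shell
# Query the running kernel release; on Jetson Linux this carries a
# "-tegra" suffix (for example, 5.16.0-tegra-g44acfbed970e).
ver=$(uname -r)
echo "kernel release: $ver"

# Sanity-check the leading major.minor numbers.
major=$(echo "$ver" | cut -d. -f1)
minor=$(echo "$ver" | cut -d. -f2)
echo "major=$major minor=$minor"
```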

Verifying Unloading of Kernel Modules Using modprobe

  1. Log in to the device.

  2. To list the loaded, active modules, run the lsmod command.

    The output will list the active, and other dependent, modules (for example, x_tables 49152 5 xt_conntrack,iptable_filter,xt_addrtype,ip_tables,xt_MASQUERADE).

  3. To remove this module, run the sudo modprobe -r x_tables command.

    The following error message will display:

    modprobe: FATAL: Module x_tables is in use

    This message is expected because the module is being used by other modules that appear in the lsmod output against x_tables. You must remove all modules that depend on x_tables and then remove x_tables.

  4. Remove modules that have no dependent modules.

    Removing a module with no dependents, such as rtl8822ce, will not throw an error.

  5. Test the modules without dependencies, where “used by” is 0.

  6. After a module is removed, removing it again (for example, rtl8822ce or userspace_alert) prints nothing; this outcome is expected.

Expected Result: Unloading of the module should happen without failure. No error/failure/warning/long delay should happen during, or after, the process, and the system should remain stable.
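The dependency check in step 3 can be scripted against lsmod output. A sketch using the example x_tables line from above:

```shell
# Example lsmod line (from the step above): module name, size, use count,
# then a comma-separated list of modules that depend on it.
lsmod_line='x_tables 49152 5 xt_conntrack,iptable_filter,xt_addrtype,ip_tables,xt_MASQUERADE'

count=$(echo "$lsmod_line" | awk '{print $3}')
deps=$(echo "$lsmod_line" | awk '{print $4}')

echo "x_tables is used by $count modules:"
echo "$deps" | tr ',' '\n'
# Each listed module must be removed (modprobe -r) before x_tables itself.
```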

Verifying the Previous Hang log - last_kmsg/console-ramoops

This procedure checks whether the console-ramoops file, also known as the last_kmsg file, is generated after a reboot when a system hang happens.

  1. Power off the device using the Power button or by running the sudo poweroff command.

  2. Manually power the device on again.

  3. After the system boots up, ensure that there is no sysfs node (/sys/fs/pstore/console-ramoops).

  4. To complete a typical boot, run the sudo reboot command.

  5. After the system boots up, run the sudo cat /sys/fs/pstore/console-ramoops-[0] command and check whether logs are being dumped into the generated file.

  6. To trigger a kernel panic, run the sudo bash -c "echo c > /proc/sysrq-trigger" command.

  7. Reboot device manually.

  8. After system boots up, to check whether console_ramoops is generated, run the sudo cat /sys/fs/pstore/console-ramoops-[0] command.

    The output should show the watchdog timeout kernel messages.

  9. Check the dmesg-ramoops-[0] logs for the dmesg logs.

    The file is named console-ramoops-x, where x is a numeric value generated at run time.

Expected Result: When a system hangs, a console_ramoops file is generated under /sys/fs/pstore with enough information about the previous hang.
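The presence check can be wrapped in a small helper. A sketch (the directory argument exists so the function can be demonstrated against an empty path; on the device it defaults to /sys/fs/pstore, per the steps above):

```shell
# Report whether a console-ramoops node exists under the given pstore
# directory (defaults to /sys/fs/pstore).
check_ramoops() {
  dir="${1:-/sys/fs/pstore}"
  found=$(ls "$dir"/console-ramoops* 2>/dev/null | head -n 1)
  if [ -n "$found" ]; then
    echo "previous-hang log found: $found"
  else
    echo "no console-ramoops log"
  fi
}

# Demonstrate against an empty temporary directory.
tmp=$(mktemp -d)
check_ramoops "$tmp"
rmdir "$tmp"
```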

Check DVFS Scaling

This procedure allows you to check whether Dynamic Voltage/Frequency Scaling (DVFS) and EMC scaling are working.

Before you begin:

  • Ensure that jetson_clocks is not active. (You can just reboot the device).

  • Verify that the nvpmodel setting is Max-N.

    To verify this setting, run the sudo nvpmodel -m 0 command.

  1. Keep the device idle for five minutes.

  2. Display and note down the CPU frequency values.

    cat /sys/devices/system/cpu/cpu[0-9]/cpufreq/cpuinfo_cur_freq
    
  3. Run a CPU workload, such as a system benchmark, for example, SpecInt.

  4. Observe the change in frequency values.

Expected Result: The changes should be reflected in the scaling nodes. To check the change in CPU/GPU frequencies, you can also run the tegrastats command instead of reading the frequency scaling nodes.
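The idle-versus-load comparison in steps 2-4 can be automated. A sketch with sample readings (both values below are hypothetical; on the device they come from the cpuinfo_cur_freq nodes shown above, which report kHz):

```shell
# Sample frequency readings in kHz (hypothetical values).
idle_khz=729600      # reading while idle (step 2)
loaded_khz=2201600   # reading under CPU workload (step 4)

if [ "$loaded_khz" -gt "$idle_khz" ]; then
  echo "DVFS scaled up: ${idle_khz} kHz -> ${loaded_khz} kHz"
else
  echo "no frequency increase observed; check DVFS"
fi
```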

CPU-therm System Throttle Alert Check

NVIDIA provides UI notifications when the CPU temperature reaches the trip point. A persistent CPU hot/thermal throttle alert toast message appears in the upper-right corner, and ! appears in the task bar.

  1. To raise the temperature of the device, run multiple apps/benchmarks on the device for a long time.

  2. When the CPU temperature reaches the trip point, the CPU thermal warning toast message will appear.

Expected Result: You should see a Hot surface alert when the CPU temperature reaches the trip point. You should also see throttle-alert cooling state alerts on the serial console. Refer to ThermalSpecifications for more information.

Camera

Before you begin:

  • Install the v4l2 utility on the device.

  • Set the sensor-id based on the output from v4l2-ctl --list-devices (for example, /dev/video<0/1/2>, where 0/1/2 are the sensor-ids identified by v4l2-ctl).

Device: Test Image Capture with Camera Devices

  1. Start the argus camera app with every camera device and capture an image.

  2. View the captured image.

Expected Result: You should be able to start the camera and capture the image. Refer to AcceleratedGstreamer for more information.

Device: Test Video Capture with Camera Devices

  1. Start the argus camera app with every camera device and capture a video.

  2. View the captured video.

Expected Result: You should be able to start the camera and capture the video. Refer to AcceleratedGstreamer for more information.


Verifying IMX274 Camera Sensor

  1. With the IMX274 dual camera module connected to the target, run the following commands:

    • nvargus_nvraw --sensorinfo --c <sensor-id1>

      For example, nvargus_nvraw --sensorinfo --c 0

    • nvargus_nvraw --sensorinfo --c <sensor-id2>

      For example, nvargus_nvraw --sensorinfo --c 1

  2. Verify that both sensors are detected.

Capturing a JPEG Image from Each Sensor

To capture a .jpeg image from each sensor, run one of the following commands:

  • nvargus_nvraw --c 0 --format jpg --file ${HOME}/frame-cam0

  • nvargus_nvraw --c 1 --format jpg --file ${HOME}/frame-cam1

Expected Result: You should be able to capture the image.

Comms

WiFi AP Connectivity with WPA2 Security

  1. Boot the device.

  2. Open the GUI Wi-Fi settings and connect to AP with WPA2 security.

Expected Result: You should be able to connect to the selected Wi-Fi AP.

MM Content (YouTube (TM) 1080p) Streaming Over WiFi

  1. Connect to the Wi-Fi AP through the GUI or the command-line interface (CLI).

  2. Ensure that the ethernet is disconnected.

  3. Start the Chrome (TM) browser on the target.

  4. Play any 1080P video on YouTube.

Expected Result: The Wi-Fi connection should work, and YouTube video playback should be smooth.

Setting up WiFi AP Connection over the Command-Line Interface

  1. Flash the build that has no GUI installed.

    If you have ubuntu-desktop, disable WiFi using WiFi settings after the boot is complete.

  2. Boot the device and connect it to an AP using the command-line interface (CLI).

    $ ifconfig -a
    

    Note

    If the WiFi is soft/hard blocked by rfkill, run the sudo rfkill unblock all command.

  3. Identify WiFi interface and configure it.

    $ iwconfig
    $ sudo ifconfig wlan0 up
    $ sudo iwlist wlan0 scan | grep ESSID
    $ sudo apt install wpasupplicant
    $ wpa_passphrase YOUR_AP_NAME PASSWORD | sudo tee /etc/wpa_supplicant.conf
    $ sudo wpa_supplicant -c /etc/wpa_supplicant.conf -i wlan0

    In another terminal window, run the following commands.

    $ sudo systemctl stop NetworkManager
    $ iwconfig
    $ ifconfig wlan0
    $ sudo ifconfig wlan0 up
    $ sudo dhclient wlan0
    $ ifconfig wlan0
    $ ifconfig -a
    
  4. Verify network connectivity.

    $ ping -I wlan0 8.8.8.8 (you can test your local IP)
    

Expected Result: The Wi-Fi should turn on and connect to the AP, and the connection should be consistent and free from drops.

Bluetooth Pairing and Unpairing

  1. Boot the device.

  2. Open the Bluetooth GUI settings.

  3. Check nearby Bluetooth devices.

  4. Pair the selected device, for example, the Bluetooth Keyboard.

  5. To disconnect the device, double-click the connected device and turn off the connection.

  6. To permanently remove the device, click Remove disconnected device.

Expected Result: You should be able to pair the selected device, for example, the Bluetooth keyboard, and the connected device should work properly.

Ethernet LAN Connectivity

  1. Boot the device with the ethernet cable connected to the device’s ethernet port.

  2. In a terminal window, ping 8.8.8.8. For example, if the ethernet interface is eth0, then execute the following command.

    $ ping -I eth0 8.8.8.8
    

Expected Result: You should be able to ping 8.8.8.8 without any packet loss.

Ethernet LAN Hot-plug

  1. Boot the device with the ethernet cable connected to the device’s ethernet port.

  2. Disconnect and reconnect the ethernet cable.

  3. In a terminal window, ping 8.8.8.8. For example, if the ethernet interface is eth0, then execute the following command.

    $ ping -I eth0 8.8.8.8
    

Expected Result: You should be able to ping 8.8.8.8 without any packet loss.

Ethernet LAN Bandwidth

  1. Boot the device with the ethernet cable connected to the device’s ethernet port.

  2. Check the device’s ethernet IP address. For example, if the ethernet interface is eth0, then execute the following command to see the IP address.

    $ ip addr show eth0
    2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
        link/ether 48:b0:2d:78:83:46 brd ff:ff:ff:ff:ff:ff
        inet 192.168.1.180/24 brd 192.168.1.255 scope global dynamic eth0
           valid_lft 77658sec preferred_lft 77658sec
        inet6 fe80::cbba:a3a8:ccf5:d0e8/64 scope link noprefixroute
           valid_lft forever preferred_lft forever
    
  3. Check the ethernet line speed. For example, if the ethernet interface is eth0, then execute the following command to check the line speed. Note the speed is reported in Mbps.

    $ cat /sys/class/net/eth0/speed
    1000
    
  4. Install iperf3 on the target device and a host machine on the same network. To install iperf3 on the target device, execute the following commands.

    $ sudo apt update
    $ sudo apt install iperf3
    
  5. Start iperf3 server on the target.

    $ iperf3 -s
    
  6. Start iperf3 client on the host machine.

    $ iperf3 -c <target-ip-address> -P8 -t 60
    

Expected Result: The bandwidth reported by iperf3 should be close to the line speed of the ethernet connection.
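A quick pass criterion is the measured throughput as a percentage of line speed. A sketch with sample numbers (both hypothetical; real values come from step 3 and the iperf3 summary line, and the 90% threshold is an assumed rule of thumb, not a documented requirement):

```shell
# Sample values (hypothetical): line speed from /sys/class/net/eth0/speed,
# throughput from the iperf3 summary line.
line_speed_mbps=1000
measured_mbps=941

pct=$(( measured_mbps * 100 / line_speed_mbps ))
echo "achieved ${pct}% of line speed"

# Flag anything under ~90% for a closer look.
if [ "$pct" -ge 90 ]; then echo "PASS"; else echo "CHECK"; fi
```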

Multimedia Encode/Decode

Before you begin:

  1. nvidia-l4t-gstreamer must be installed to run GStreamer pipelines. Refer to SoftwarePackagesAndTheUpdateMechanism for more information.

  2. Log in to the device and open a terminal window.

  3. Verify that the MM sample files are available on the device.

  4. If required by the test case, connect a 4K display.

  5. Ensure that your HDMI TV is connected, and the X server and the Ubuntu desktop are running.

Camera capture using GStreamer

To enable ISP processing for CSI cameras or Bayer captures, use the nvarguscamerasrc GStreamer plugin.

Before you begin, ensure that the camera is connected and working. Refer to Camera for more information.

Note

Set the sensor-id based on the output by running the v4l2-ctl --list-devices command.

Capture an image

  1. Run the following command.

    $ gst-launch-1.0 nvarguscamerasrc num-buffers=1 sensor-id=0 ! 'video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, format=(string)NV12' ! nvjpegenc ! filesink location=${HOME}/gst-frame-cam0.jpg
    $ gst-launch-1.0 nvarguscamerasrc num-buffers=1 sensor-id=1 ! 'video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, format=(string)NV12' ! nvjpegenc ! filesink location=${HOME}/gst-frame-cam1.jpg
    

Expected Result: You should be able to capture the image.

Capturing a Motion-JPEG Stream

  1. Run the following command.

    $ gst-launch-1.0 nvarguscamerasrc num-buffers=300 sensor-id=0 ! 'video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, format=(string)NV12, framerate=(fraction)30/1' ! nvjpegenc !  avimux ! filesink location=${HOME}/mjpeg.avi -e
    

Expected Result: The motion-JPEG stream should get captured without crashes or errors.

Preview the Camera Stream

  1. Run the following command.

    $ gst-launch-1.0 nvarguscamerasrc num-buffers=300 sensor-id=0 ! 'video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, format=(string)NV12, framerate=(fraction)30/1' ! nvegltransform ! nveglglessink sync=0
    

Expected Result: You should be able to stream without crashes or errors.

Capturing and Recording Video from the Camera

  1. Run the following command.

    $ gst-launch-1.0 nvarguscamerasrc num-buffers=300 sensor-id=0 ! 'video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, format=(string)NV12, framerate=(fraction)30/1' ! nvv4l2h265enc bitrate=8000000 ! h265parse ! qtmux ! filesink location=<filename_h265>.mp4
    

Expected Result: You should be able to play back the stream without crashes or errors.

Encode using GStreamer

  1. Run one of the following commands.

    $ gst-launch-1.0 nvarguscamerasrc num-buffers=300 sensor-id=0 ! 'video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, format=(string)NV12, framerate=(fraction)30/1' ! nvv4l2h265enc ! h265parse ! qtmux ! filesink location=<filename_h265>.mp4
    $ gst-launch-1.0 videotestsrc num-buffers=300 ! 'video/x-raw, width=(int)1920, height=(int)1080, format=(string)I420, framerate=(fraction)30/1' ! nvvidconv ! 'video/x-raw(memory:NVMM), format=(string)I420' ! nvv4l2h264enc ! h264parse ! qtmux ! filesink location=<filename_h264>.mp4 -e
    $ gst-launch-1.0 filesrc location=<filename_1080.yuv> ! videoparse width=1920 height=1080 format=2 framerate=30 ! 'video/x-raw, format=(string)I420' ! nvvidconv ! 'video/x-raw(memory:NVMM), format=(string)NV12' ! nvv4l2av1enc ! matroskamux ! filesink location=<filename_av1>.mkv -e
    

Expected Result: The encoded stream should be correct, and there should be no corruption in the stream.

Decode using GStreamer

  1. Run one of the following commands.

    $ gst-launch-1.0 filesrc location=<filename_h264>.mp4 ! qtdemux ! queue ! h264parse ! nvv4l2decoder ! nv3dsink -e
    $ gst-launch-1.0 filesrc location=<filename_h265>.mp4 ! qtdemux ! queue ! h265parse ! nvv4l2decoder ! nv3dsink -e
    $ gst-launch-1.0 filesrc location=<filename_av1>.webm ! matroskademux ! queue ! nvv4l2decoder ! nv3dsink -e
    

Expected Result: There should be no corruption or buffer drops during playback.

JPEG Decode using GStreamer

You can complete this task in one of the following ways:

  • Using nv3dsink

    $ gst-launch-1.0 -v filesrc location=<JPEG_IMAGE_LOCATION><IMAGE_NAME>.jpg ! jpegparse ! nvjpegdec ! nvvidconv ! 'video/x-raw(memory:NVMM), format=RGBA' ! nv3dsink
    
  • Using nveglglessink

    $ gst-launch-1.0 -v filesrc location=<JPEG_IMAGE_LOCATION><IMAGE_NAME>.jpg ! jpegparse ! nvjpegdec ! nvegltransform ! nveglglessink
    

Expected Result: The JPEG decoding should be correct, and there should be no corruption with the decoded image.

JPEG Encode using GStreamer

  1. Run the following command.

    $ gst-launch-1.0 videotestsrc num-buffers=1 ! 'video/x-raw, width=1920, height=1080, format=(string)I420' ! nvvidconv ! 'video/x-raw(memory:NVMM)' ! nvjpegenc ! filesink location=${HOME}/frame.jpg
    

Expected Result: The JPEG encoding should be correct, and there should be no corruption in the encoded image.

Transform using GStreamer

nvvidconv can be used to perform video format conversion, scaling, and cropping operations. Refer to AcceleratedGstreamer for more information.

  1. Run one of the following commands.

    $ gst-launch-1.0 videotestsrc num-buffers=100 ! 'video/x-raw, format=(string)UYVY, width=(int)1280, height=(int)720' ! nvvidconv ! 'video/x-raw(memory:NVMM), format=(string)I420' ! nvv4l2h264enc ! h264parse ! qtmux ! filesink location=<test>.mp4
    $ gst-launch-1.0 videotestsrc num-buffers=100 ! 'video/x-raw, format=(string)I420, width=(int)1280, height=(int)720' ! nvvidconv ! 'video/x-raw(memory:NVMM), width=(int)640, height=(int)480, format=(string)I420' ! nvv4l2h264enc ! h264parse ! qtmux ! filesink location=<test>.mp4
    

Expected Result: The encoded files should be correct with no corruption.

Transcode using GStreamer

  1. Run one of the following commands.

    $ gst-launch-1.0 filesrc location=<filename_1080p_h264.mp4> ! qtdemux ! h264parse ! nvv4l2decoder ! nvv4l2h265enc ! h265parse ! qtmux ! filesink location=<Transcoded_filename>.mp4 -e
    $ gst-launch-1.0 filesrc location=<filename_1080p_h265.mp4> ! qtdemux ! h265parse ! nvv4l2decoder ! nvv4l2av1enc ! matroskamux ! filesink location=<Transcoded_filename>.mkv -e
    

Expected Result: The encoded file stream should run correctly with no corruption.

Video Playback using Application

This procedure verifies that the playback of a 4K video with H265 codec is successful using nvgstplayer-1.0.

  1. Run the following command.

    $ nvgstplayer-1.0 -i <H265_FILE_NAME>.webm --stats
    

Expected Result: Video playback should be smooth without any corruption or dropped frames.

MP3 Playback Test to Verify that MP3 Playback is Successful Using nvgstplayer-1.0

  1. Run the following command.

    $ nvgstplayer-1.0 -i MP3_file.mp3 --stats
    

Expected Result: The MP3 playback should run correctly with no glitches or corruption.

MP3 Streaming (Stream the MP3 file from an HTTP Server)

Before you begin, ensure that the display is connected and the X server and the Ubuntu desktop are running.

  1. Ensure that the host MP3 file is on the HTTP server for streaming.

  2. Log in to the device and open a terminal window.

  3. Navigate to the directory where the MP3 file is located and run the following command.

    $ python3 -m http.server 8001 &
    
  4. Download or copy the MP3 file to the device.

  5. Run the following command.

    $ nvgstplayer-1.0 -i http://<IP_ADDR_OF_DEVICE>:8001/MP3_file.mp3 --stats
    

Expected Result: The audio streaming should be without noise or breaks, and there should be no hangs or crashes while streaming.

Streaming Audio and Video File from the HTTP Server

Before you begin, ensure that the display is connected and the X server and the Ubuntu desktop are running.

  1. Ensure that the host audio/video file is on the HTTP server for streaming.

  2. Log in to the device and open a terminal window.

  3. Navigate to the directory where the audio and video file is located and run the following command.

    $ python3 -m http.server 8001 &
    
  4. Download or copy the audio and video file to the device.

  5. Run the following command.

    $ nvgstplayer-1.0 -i  http://<IP_ADDR_OF_DEVICE>:8001/<VP9_FILE_NAME>.webm --stats
    

Expected Result: Video streaming should be without distortion or issues, and there should be no hang or crash while streaming.

AUDIO+VIDEO RTSP Streaming: H264+AAC

This procedure streams a clip from an RTSP server.

  1. Open the browser.

  2. Stream the content from the RTSP server.

  3. Open the link, for example, http://your_streaming_site.com/rtsp-server.html.

  4. Click the AUDIO+VIDEO test file and stream.

Expected Result: Video streaming should have no distortion or issues, and there should be no hang or crash while streaming.

Camera Argus Samples

Compiling Argus SDK Samples and Running cudaHistogram

  1. Run the following command.

    cd /usr/src/jetson_multimedia_api/argus
    
  2. Run the following command.

    sudo apt-get install cmake build-essential pkg-config libx11-dev libgtk-3-dev libexpat1-dev libjpeg-dev libgstreamer1.0-dev
    
  3. Run the following command.

    sudo mkdir build
    
  4. Run the following command.

    cd build
    
  5. Run the following command.

    sudo cmake ..
    
  6. Run the following command.

    cd samples/cudaHistogram
    
  7. Run the following command.

    sudo make
    
  8. Run the following command.

    sudo make install
    

Expected Result: No failures should be observed during the compilation, and the sample binary should exist and run without issues.
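The build steps above can also be scripted for repeated runs. The following is a sketch only: the paths and package list come from the steps, while `build_steps` and `run_all` are hypothetical helpers, and sudo access is assumed.

```python
# Sketch: the manual Argus build steps as an ordered (cwd, argv) command list.
import subprocess

ARGUS_ROOT = "/usr/src/jetson_multimedia_api/argus"
DEPS = ["cmake", "build-essential", "pkg-config", "libx11-dev", "libgtk-3-dev",
        "libexpat1-dev", "libjpeg-dev", "libgstreamer1.0-dev"]

def build_steps(sample="cudaHistogram"):
    return [
        (ARGUS_ROOT, ["sudo", "apt-get", "install", "-y"] + DEPS),
        (ARGUS_ROOT, ["sudo", "mkdir", "-p", "build"]),
        (f"{ARGUS_ROOT}/build", ["sudo", "cmake", ".."]),
        (f"{ARGUS_ROOT}/build/samples/{sample}", ["sudo", "make"]),
        (f"{ARGUS_ROOT}/build/samples/{sample}", ["sudo", "make", "install"]),
    ]

def run_all(steps):
    # Execute each step in its working directory, failing fast on error.
    for cwd, argv in steps:
        subprocess.run(argv, cwd=cwd, check=True)
```

The same sequence applies to the gstVideoEncode and multiSensor samples by changing the `sample` argument.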

Compiling Argus SDK Samples and Running gstVideoEncode

  1. Run the following command.

    cd /usr/src/jetson_multimedia_api/argus
    
  2. Run the following command.

    sudo apt-get install cmake build-essential pkg-config libx11-dev libgtk-3-dev libexpat1-dev libjpeg-dev libgstreamer1.0-dev
    
  3. Run the following command.

    sudo mkdir build
    
  4. Run the following command.

    cd build
    
  5. Run the following command.

    sudo cmake ..
    
  6. Run the following command.

    cd samples/gstVideoEncode
    
  7. Run the following command.

    sudo make
    
  8. Run the following command.

    sudo make install
    

Expected Result: No failures should be observed during compilation, and the sample binary should exist and run without issues.

Compiling Argus SDK Samples and Running multiSensor

Before you begin, ensure that two camera sensors are connected to the device.

  1. Run the following command.

    cd /usr/src/jetson_multimedia_api/argus
    
  2. Run the following command.

    sudo apt-get install cmake build-essential pkg-config libx11-dev libgtk-3-dev libexpat1-dev libjpeg-dev libgstreamer1.0-dev
    
  3. Run the following command.

    sudo mkdir build
    
  4. Run the following command.

    cd build
    
  5. Run the following command.

    sudo cmake ..
    
  6. Run the following command.

    cd samples/multiSensor
    
  7. Run the following command.

    sudo make
    
  8. Run the following command.

    sudo make install
    

Expected Result: No failures should be observed during compilation, and the sample binary should exist and run without issues.

Web Camera Capture Using GStreamer

USB cameras, Bayer sensors, and YUV sensors output YUV images without ISP processing and do not use the NVIDIA camera software stack. As a result, the OSS GStreamer v4l2src plugin is used for streaming.

Before you begin:

  • Ensure that gst-launch is available on the device with all dependencies installed.

  • Identify the /dev/videoX interface for the USB web camera.

Capturing Video from a USB Web Camera and Recording the Video

  1. Capture the video from the USB web cam in the MP4 format.

    $ gst-launch-1.0 v4l2src device=/dev/video0 num-buffers=300 ! 'video/x-raw, width=640, height=480, format=(string)YUY2, framerate=(fraction)30/1' ! nvvidconv ! 'video/x-raw(memory:NVMM), format=(string)NV12' ! nvv4l2h264enc bitrate=8000000 ! h264parse ! qtmux ! filesink location=${HOME}/test.mp4 -e 2> /dev/null
    
  2. Decode and render the captured video.

    $ gst-launch-1.0 filesrc location=${HOME}/test.mp4 !  qtdemux ! h264parse ! nvv4l2decoder ! nvegltransform ! nveglglessink sync=0 2> /dev/null
    
  3. Capture a video from USB web cam in the mjpeg.avi format.

    $ gst-launch-1.0 v4l2src device=/dev/video0 num-buffers=300 ! 'video/x-raw, width=640, height=480, format=(string)YUY2, framerate=(fraction)30/1' ! nvvidconv ! 'video/x-raw(memory:NVMM), format=(string)I420' ! nvjpegenc ! avimux ! filesink location=${HOME}/mjpeg.avi -e 2> /dev/null
    
  4. Decode and render the captured video.

    $ gst-launch-1.0 filesrc location=${HOME}/mjpeg.avi ! avidemux ! nvv4l2decoder mjpeg=true ! nvegltransform ! nveglglessink sync=0 2> /dev/null
    

Expected Result: Video Capture and Video Encode should be successful using the USB web cam.
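The capture commands in steps 1 and 3 share the same v4l2src front end and differ only in the encode tail. As a sketch, they can be parameterized; `capture_pipeline` is a hypothetical helper, with caps strings mirroring the commands above.

```python
# Sketch: parameterize the v4l2src capture pipelines used in this procedure.
def capture_pipeline(device="/dev/video0", width=640, height=480, fps=30,
                     codec="h264", out="test.mp4", num_buffers=300):
    caps = (f"video/x-raw, width={width}, height={height}, "
            f"format=(string)YUY2, framerate=(fraction){fps}/1")
    if codec == "h264":
        tail = ("nvvidconv ! 'video/x-raw(memory:NVMM), format=(string)NV12' ! "
                "nvv4l2h264enc bitrate=8000000 ! h264parse ! qtmux")
    else:  # mjpeg path from step 3
        tail = ("nvvidconv ! 'video/x-raw(memory:NVMM), format=(string)I420' ! "
                "nvjpegenc ! avimux")
    return (f"gst-launch-1.0 v4l2src device={device} num-buffers={num_buffers} ! "
            f"'{caps}' ! {tail} ! filesink location={out} -e")
```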

Capturing and Displaying the Video from a USB Web Camera

  1. Capture the video from the USB web cam and display it.

    gst-launch-1.0 v4l2src device=/dev/video0 num-buffers=300 ! 'video/x-raw, width=640, height=480, format=(string)YUY2, framerate=(fraction)30/1' ! nvvidconv ! 'video/x-raw(memory:NVMM), format=(string)NV12' ! nvegltransform ! nveglglessink sync=0 2> /dev/null
    

Expected Result: The captured video should be displayed successfully.

Capturing Video from a USB Web Camera and Running It Through TRT

  1. Capture a video from the USB web cam and run it through TRT.

    gst-launch-1.0 v4l2src device=/dev/video0 num-buffers=300 ! 'video/x-raw, width=640, height=480, format=(string)YUY2, framerate=(fraction)30/1' ! nvvidconv ! 'video/x-raw(memory:NVMM), format=(string)NV12' ! m.sink_0 nvstreammux width=640 height=480 name=m batch-size=1 ! nvinfer config-file-path=/opt/nvidia/deepstream/deepstream/samples/configs/deepstream-app/config_infer_primary.txt batch_size=1 ! nvvidconv ! 'video/x-raw(memory:NVMM), format=RGBA' ! nvdsosd process-mode=1 ! nvegltransform ! nveglglessink sync=0 2> /dev/null
    

Expected Result: The captured video should successfully run through TRT.

NVIDIA Containers

Install Container Engine and NVIDIA Container Toolkit

  1. Install a supported container engine (Docker, Containerd, CRI-O, Podman) for your Linux distribution.

  2. Install the NVIDIA Container Toolkit: Refer to the instructions here.

  3. Configure container engine: Refer to the instructions here.

Run JetPack Container

  1. Pull the JetPack Container.

    # For docker
    sudo docker pull nvcr.io/nvidia/l4t-jetpack:r36.3.0
    
    # For podman
    podman pull nvcr.io/nvidia/l4t-jetpack:r36.3.0
    
  2. Run the JetPack Container.

    # For docker
    sudo docker run --rm -it \
     -e DISPLAY --net=host \
     --runtime nvidia \
     -v /tmp/.X11-unix/:/tmp/.X11-unix \
     -v ${HOME}/cuda-samples:/root/cuda-samples \
     nvcr.io/nvidia/l4t-jetpack:r36.3.0 /bin/bash
    
    # For podman
    podman run --rm -it \
     -e DISPLAY --net=host \
     --device nvidia.com/gpu=all \
     --group-add keep-groups \
     --security-opt label=disable \
     -v ${HOME}/cuda-samples:/root/cuda-samples \
     nvcr.io/nvidia/l4t-jetpack:r36.3.0 /bin/bash
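For scripted container runs, the docker invocation from step 2 can be assembled programmatically. This is a sketch: the image tag and mounts come from the commands above, while `docker_run_args` is a hypothetical helper.

```python
# Sketch: assemble the `docker run` argument list from the step above.
import os

IMAGE = "nvcr.io/nvidia/l4t-jetpack:r36.3.0"

def docker_run_args(image=IMAGE, home=None):
    home = home or os.path.expanduser("~")
    return ["sudo", "docker", "run", "--rm", "-it",
            "-e", "DISPLAY", "--net=host",
            "--runtime", "nvidia",
            "-v", "/tmp/.X11-unix/:/tmp/.X11-unix",
            "-v", f"{home}/cuda-samples:/root/cuda-samples",
            image, "/bin/bash"]
```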
    

CUDA Samples

Before you begin: Get the CUDA samples and set up DISPLAY.

  • Install git on the device.

  • Get the CUDA 12 samples:

    cd ${HOME}
    git clone -b v12.2 https://github.com/NVIDIA/cuda-samples.git
    
  • Log in to the display and check the display TTY using the command w.

    11:54:08 up 3 min,  2 users,  load average: 0.33, 0.14, 0.05
    USER     TTY      FROM             LOGIN@   IDLE   JCPU   PCPU WHAT
    ubuntu   ttyS0    -                11:50    0.00s  0.09s  0.04s w
    ubuntu   :1       :1               11:53   ?xdm?  39.26s  0.04s /usr/lib/gdm3/g
    
  • Set up the DISPLAY environment variable based on the output of the w command.

    export DISPLAY=:1
    xhost +local:
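Picking the display out of the w output can also be done mechanically. The sketch below assumes, as in the sample output above, that the graphical session shows a TTY of the form ":0" or ":1"; `find_display` is a hypothetical helper.

```python
# Sketch: find the X display TTY (e.g. ":1") in the output of `w`.
import re

def find_display(w_output):
    for line in w_output.splitlines():
        fields = line.split()
        # The TTY column of a graphical session looks like ":0", ":1", ...
        if len(fields) >= 2 and re.fullmatch(r":\d+", fields[1]):
            return fields[1]
    return None
```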
    

Build and Run CUDA Samples Natively on Target Device

Before you begin, ensure that CUDA 12.2 is installed on the device with all dependencies.

  1. Build CUDA samples.

    sudo apt-get install libglfw3
    sudo apt-get install libglfw3-dev
    cd ${HOME}/cuda-samples
    make clean
    make -j$(nproc)
    

Expected Result: No error/failure should be observed during the compilation, and an executable binary file should appear after the compilation is complete. Refer to https://docs.nvidia.com/cuda/cuda-samples/index.html#getting-started-with-cuda-samples for more information.

  2. Go to the section Run CUDA Samples and run the given commands on the target.

Build and Run CUDA Samples in Container on Target Device

  1. Run the JetPack container using the instructions in the section Run JetPack Container.

  2. Build the CUDA samples within it.

    apt update && apt install -y libglfw3 libglfw3-dev libdrm-dev pkg-config cmake
    
    cd ${HOME}/cuda-samples
    make clean
    make -j$(nproc)
    

Expected Result: No error/failure should be observed during the compilation, and an executable binary file should appear after the compilation is complete. Refer to https://docs.nvidia.com/cuda/cuda-samples/index.html#getting-started-with-cuda-samples for more information.

  3. Go to the section Run CUDA Samples and run the given commands within the container.

Run CUDA Samples

The following instructions to run CUDA samples can be executed natively on the target or within the JetPack container.

  1. Run Bandwidth test sample applications.

    cd ${HOME}/cuda-samples/bin/aarch64/linux/release
    ./bandwidthTest
    

Expected Result: No error/failure should be observed, and the sample application should run successfully.

  2. Run the device query test sample applications.

    cd ${HOME}/cuda-samples/bin/aarch64/linux/release
    ./deviceQuery
    

Expected Result: No error/failure should be observed and the sample application should run successfully.

  3. Run simpleGL test sample applications.

    cd ${HOME}/cuda-samples/bin/aarch64/linux/release
    ./simpleGL
    

Expected Result: No error/failure should be observed, and the sample application should run successfully.

  4. Run boxFilter test sample applications.

    cd ${HOME}/cuda-samples/bin/aarch64/linux/release
    ./boxFilter
    

Expected Result: No error/failure should be observed, and the sample application should run successfully.

  5. Run nbody test sample applications.

    cd ${HOME}/cuda-samples/bin/aarch64/linux/release
    ./nbody
    

Expected Result: No error/failure should be observed, and the sample application should run successfully.

  6. Run smokeParticles test sample applications.

    cd ${HOME}/cuda-samples/bin/aarch64/linux/release
    ./smokeParticles
    

Expected Result: No error/failure should be observed, and the sample application should run successfully.

  7. Run particles test sample applications.

    cd ${HOME}/cuda-samples/bin/aarch64/linux/release
    ./particles
    

Expected Result: No error/failure should be observed, and the sample application should run successfully.

  8. Run FDTD3d test sample applications.

    cd ${HOME}/cuda-samples/bin/aarch64/linux/release
    ./FDTD3d
    

Expected Result: No error/failure should be observed, and the sample application should run successfully.

  9. Run simpleCUBLAS test sample applications.

    cd ${HOME}/cuda-samples/bin/aarch64/linux/release
    ./simpleCUBLAS
    

Expected Result: No error/failure should be observed, and the sample application should run successfully.

  10. Run batchCUBLAS test sample applications.

    cd ${HOME}/cuda-samples/bin/aarch64/linux/release
    ./batchCUBLAS

Expected Result: No error/failure should be observed, and the sample application should run successfully.

  11. Run simpleCUFFT test sample applications.

    cd ${HOME}/cuda-samples/bin/aarch64/linux/release
    ./simpleCUFFT

Expected Result: No error/failure should be observed, and the sample application should run successfully.

  12. Run MersenneTwisterGP11213 test sample applications.

    cd ${HOME}/cuda-samples/bin/aarch64/linux/release
    ./MersenneTwisterGP11213

Expected Result: No error/failure should be observed, and the sample application should run successfully.
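All of the samples above follow the same run-and-check pattern, so a sweep over the binary directory can report failures in one pass. This is a sketch: binary names come from the steps above, `run_samples` is a hypothetical helper, and on a non-Jetson host it only reports which binaries are missing.

```python
# Sketch: run every CUDA sample listed above and collect pass/fail results.
import pathlib
import subprocess

SAMPLES = ["bandwidthTest", "deviceQuery", "simpleGL", "boxFilter", "nbody",
           "smokeParticles", "particles", "FDTD3d", "simpleCUBLAS",
           "batchCUBLAS", "simpleCUFFT", "MersenneTwisterGP11213"]

def run_samples(bin_dir, samples=SAMPLES):
    results = {}
    for name in samples:
        exe = pathlib.Path(bin_dir) / name
        if not exe.exists():
            results[name] = "missing"
            continue
        rc = subprocess.run([str(exe)], cwd=bin_dir).returncode
        results[name] = "ok" if rc == 0 else f"exit {rc}"
    return results
```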

Run cuDNN Samples

The following instructions to build and run the cuDNN samples can be executed natively on the target or within the JetPack container.

  1. Build and run the conv_sample sample.

    cd /usr/src/cudnn_samples_v8
    cd conv_sample
    sudo make -j8

    sudo chmod +x run_conv_sample.sh
    sudo ./run_conv_sample.sh
    
  2. Build and run the mnistCUDNN sample.

    cd /usr/src/cudnn_samples_v8
    cd mnistCUDNN
    sudo make -j8
    
    sudo chmod +x mnistCUDNN
    sudo ./mnistCUDNN
    

Expected Result: No error/failure should be observed on compilation, and the executable binary should appear after the compilation and run without issues. Refer to https://docs.nvidia.com/deeplearning/cudnn/install-guide/index.html#verify for more information.

TensorRT Samples

The following instructions to build and run the TensorRT samples can be executed natively on the target or within the JetPack container.

  1. Build TensorRT samples.

    mkdir ${HOME}/tensorrt-samples
    ln -s /opt/nvidia/tensorrt/data ${HOME}/tensorrt-samples/data
    cp -a /opt/nvidia/tensorrt/samples ${HOME}/tensorrt-samples/
    cd ${HOME}/tensorrt-samples/samples
    make clean
    make
    

Expected Result: No error/failure should be observed on compilation, and the executable binary should appear after the compilation. Refer to https://docs.nvidia.com/deeplearning/tensorrt/sample-support-guide/index.html for more information.

Run the TRT Sample (sample_algorithm_selector)

  1. Run the following command.

    cd ${HOME}/tensorrt-samples/bin
    ./sample_algorithm_selector
    

Expected Result: No error/failure should be observed, and the sample should run without issues. Refer to https://docs.nvidia.com/deeplearning/tensorrt/sample-support-guide/index.html for more information.

Run TRT sample (sample_onnx_mnist)

  1. Run the following command.

    cd ${HOME}/tensorrt-samples/bin
    ./sample_onnx_mnist
    

Expected Result: No error/failure should be observed, and the sample should run without issues. Refer to https://docs.nvidia.com/deeplearning/tensorrt/sample-support-guide/index.html for more information.

Run a TRT sample (sample_onnx_mnist useDLACore=0)

  1. Run the following command.

    cd ${HOME}/tensorrt-samples/bin
    ./sample_onnx_mnist --useDLACore=0
    

Expected Result: No error/failure should be observed, and the sample should run without issues.

Run a TRT sample (sample_onnx_mnist useDLACore=1)

  1. Run the following command.

    cd ${HOME}/tensorrt-samples/bin
    ./sample_onnx_mnist --useDLACore=1
    

Expected Result: No error/failure should be observed, and the sample should run without issues.
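The three sample_onnx_mnist runs above differ only in the DLA core selection. A sketch that enumerates them for a test harness follows; `mnist_commands` is a hypothetical helper, and two DLA cores are assumed, as exposed on Xavier and Orin.

```python
# Sketch: the GPU run plus one run per DLA core, as in the steps above.
def mnist_commands(bin_dir="${HOME}/tensorrt-samples/bin", dla_cores=(0, 1)):
    cmds = [f"{bin_dir}/sample_onnx_mnist"]
    cmds += [f"{bin_dir}/sample_onnx_mnist --useDLACore={c}" for c in dla_cores]
    return cmds
```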

TRT + MM

Test video decode and TensorRT object detection with output rendered to the display.

  1. Run one of the following commands.

    gst-launch-1.0 filesrc location=/opt/nvidia/deepstream/deepstream/samples/streams/sample_720p.h264 ! h264parse ! nvv4l2decoder ! m.sink_0 nvstreammux width=1280 height=720 name=m batch-size=1 !  nvinfer config-file-path=/opt/nvidia/deepstream/deepstream/samples/configs/deepstream-app/config_infer_primary.txt batch_size=1 ! nvvidconv ! 'video/x-raw(memory:NVMM), format=RGBA' ! nvdsosd process-mode=1 ! nv3dsink sync=0 2> /dev/null
    
    gst-launch-1.0 filesrc location=/opt/nvidia/deepstream/deepstream/samples/streams/sample_720p.h264 ! h264parse ! nvv4l2decoder ! m.sink_0 nvstreammux width=1280 height=720 name=m batch-size=1 !  nvinfer config-file-path=/opt/nvidia/deepstream/deepstream/samples/configs/deepstream-app/config_infer_primary.txt batch_size=1 ! nvvidconv ! 'video/x-raw(memory:NVMM), format=RGBA' ! nvdsosd process-mode=1 ! nvegltransform ! nveglglessink sync=0 2> /dev/null
    

Expected Result: Test video decoding and TensorRT object detection with output rendered to the display should be successful without any corruption or noise.

MM Samples

Before you begin, ensure that MM API samples are available on the device.

Checking the Compilation and Running of the video_convert App

  1. Run the following command.

    $ cd /usr/src/jetson_multimedia_api/samples/07_video_convert
    $ sudo make
    
  2. Run the following command.

    $ sudo ./video_convert <in-file> <in-width> <in-height> <in-format> <out-file-prefix> <out-width> <out-height> <out-format> [OPTIONS]
    

    For example, sudo ./video_convert ../../data/Picture/nvidia-logo.yuv 1920 1080 YUV420 test.yuv 1920 1080 YUYV

    Note

    The video_convert sample consumes a YUV file. If you do not have a YUV file, use the jpeg_decode sample to generate one. For example, run the following command:

    $ cd jetson_multimedia_api/samples/06_jpeg_decode/
    $ sudo ./jpeg_decode num_files 1 ../../data/Picture/nvidia-logo.jpg ../../data/Picture/nvidia-logo.yuv

Expected Result: No error/failure should be observed on compilation, and the executable binary should appear after compilation and run without issues. Refer to https://docs.nvidia.com/jetson/archives/r35.4.1/ApiReference/l4t_mm_07_video_convert.html for more information.

Check the Compilation and Run the Backend App

  1. Run the following command.

    $ cd /usr/src/jetson_multimedia_api/samples/backend
    $ sudo make
    
  2. Run the following command.

    $ sudo ./backend 1 ../../data/Video/sample_outdoor_car_1080p_10fps.h264 H264 \
    --trt-deployfile ../../data/Model/GoogleNet_one_class/GoogleNet_modified_oneClass_halfHD.prototxt \
    --trt-modelfile ../../data/Model/GoogleNet_one_class/GoogleNet_modified_oneClass_halfHD.caffemodel \
    --trt-mode 0 --trt-proc-interval 1 -fps 10
    

Expected Result: No error/failure should be observed on compilation, and the executable binary should appear after compilation and run without issues. Refer to https://docs.nvidia.com/jetson/archives/r36.2/ApiReference/l4t_mm_backend.html for more information.

Check the Compilation and Run the video_encode App

  1. Run the following command.

    $ cd /usr/src/jetson_multimedia_api/samples/01_video_encode
    $ sudo make
    
  2. Run the following command.

    $ sudo ./video_encode <in-file> <in-width> <in-height> <encoder-type> <out-file> [OPTIONS]
    

    For example, sudo ./video_encode ../../data/Video/sample_outdoor_car_1080p_10fps.yuv 1920 1080 H264 sample_outdoor_car_1080p_10fps.h264.

Expected Result: No error/failure should be observed on compilation, and the executable binary should appear after compilation and run without issues. Refer to https://docs.nvidia.com/jetson/archives/r36.2/ApiReference/l4t_mm_01_video_encode.html for more information.

Check the Compilation and Run the video_decode App

  1. Run the following command.

    $ cd /usr/src/jetson_multimedia_api/samples/00_video_decode
    $ sudo make
    
  2. Run the following command.

    $ sudo ./video_decode <in-format> [options] <in-file>
    

    For example, ./video_decode H264 ../../data/Video/sample_outdoor_car_1080p_10fps.h264

Expected Result: No error/failure should be observed on compilation, and the executable binary should appear after compilation and run without issues. Refer to https://docs.nvidia.com/jetson/archives/r35.4.1/ApiReference/l4t_mm_00_video_decode.html for more information.

Complete Pipeline: Inferencing

[Jetson] Classifying Images with ImageNet (googlenet,caffe)

  1. Flash the device with the test image.

  2. Install the JetPack components.

  3. Build the project on the device from the source (https://github.com/dusty-nv/jetson-inference/blob/master/docs/building-repo-2.md).

    The repository for TensorRT-accelerated deep learning networks for image recognition, object detection with localization (for example, bounding boxes), and semantic segmentation will be downloaded. Various pre-trained DNN models are automatically downloaded.

    $ sudo apt-get update
    $ sudo apt-get install git cmake libpython3-dev python3-numpy
    $ git clone --recursive https://github.com/dusty-nv/jetson-inference
    $ cd jetson-inference
    $ mkdir build
    $ cd build
    $ cmake ../
    $ make
    $ sudo make install
    $ sudo ldconfig
    

    Refer to https://github.com/dusty-nv/jetson-inference/blob/master/docs/jetpack-setup-2.md for more information.

    $ cd jetson-inference/build/aarch64/bin
    $ sudo python3.6 ./imagenet-console.py --network=googlenet images/orange_0.jpg output_0.jpg # --network flag is optional, default is googlenet
    

    Note

    The first time you run each model, TensorRT will take a few minutes to optimize the network. The optimized network file is cached to disk, so future runs using the model will load faster.

Expected Result: The installation should complete without any issues, and inferencing should give the expected output. For example, the image is recognized as an orange (class #950) with 97.900391% confidence. Refer to https://github.com/dusty-nv/jetson-inference for more information.

[Jetson] Running the Live Camera Recognition Demo with ImageNet (googlenet,caffe)

Before you begin, ensure that the Ubuntu Desktop with the graphical desktop packages is installed.

  1. Flash device with the test image.

  2. Connect the camera to the device.

  3. Install the JetPack components.

  4. Build the project on device from the source (refer to https://github.com/dusty-nv/jetson-inference/blob/master/docs/building-repo-2.md for more information).

    The repository for TensorRT-accelerated deep learning networks for image recognition, object detection with localization (for example, bounding boxes), and semantic segmentation will be downloaded. Various pre-trained DNN models are automatically downloaded.

    $ cd $HOME
    $ sudo apt-get update
    $ sudo apt-get install git cmake libpython3-dev python3-numpy
    $ git clone --recursive https://github.com/dusty-nv/jetson-inference
    $ cd jetson-inference
    $ mkdir build
    $ cd build
    $ cmake ../
    $ make
    $ sudo make install
    $ sudo ldconfig
    

    Refer to https://github.com/dusty-nv/jetson-inference/blob/master/docs/jetpack-setup-2.md for more information.

  5. Navigate to $HOME/jetson-inference/build/aarch64/bin.

    $ sudo python3.6 ./imagenet-camera --network=resnet-18 # using ResNet-18, default MIPI CSI camera (1280x720)
    
  6. Run the test for 5 minutes.

  7. Interrupt the test.

Expected Result: The installation should complete without any issues, and inferencing should give the expected output. In this case, it is:

class 0400 - 0.021591 (academic gown, academic robe, judge's robe)
class 0413 - 0.025543 (assault rifle, assault gun)
class 0526 - 0.023438 (desk)
class 0534 - 0.011513 (dishwasher, dish washer, dishwashing machine)
class 0592 - 0.027084 (hard disc, hard disk, fixed disk)
class 0667 - 0.238525 (mortarboard)
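The class lines above can be checked mechanically to confirm the top prediction. The sketch below assumes the `class NNNN - confidence (label)` line format shown in the expected output; `top_class` is a hypothetical helper.

```python
# Sketch: parse imagenet output lines and return the top-confidence class.
import re

LINE = re.compile(r"class\s+(\d+)\s+-\s+([\d.]+)\s+\((.+)\)")

def top_class(output):
    best = None
    for m in LINE.finditer(output):
        idx, conf, label = int(m.group(1)), float(m.group(2)), m.group(3)
        if best is None or conf > best[1]:
            best = (idx, conf, label)
    return best
```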

DeepStream Test Apps

Run the DeepStream Test Apps

  1. To achieve the best performance, set the max clock settings (sudo jetson_clocks), and run the test apps.

      cd /opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream-image-decode-test
      deepstream-image-decode-app /opt/nvidia/deepstream/deepstream/samples/streams/sample_720p_mjpeg.mp4
      cd /opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream-test1
      deepstream-test1-app ./dstest1_config.yml 2> /dev/null
      cd /opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream-test2
      deepstream-test2-app ./dstest2_config.yml 2> /dev/null
      cd /opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream-test3
      deepstream-test3-app ./dstest3_config.yml 2> /dev/null
    
    Note

    Messages such as WARNING: Deserialize engine failed because file path: <engine-name> open error are expected for engines that are not present.
    

Test the Secondary gstreamer Inference Engine (SGIE)

  1. Avoid dropping frames during playback.

    sudo sed -i 's/sync=1/sync=0/' /opt/nvidia/deepstream/deepstream/samples/configs/deepstream-app/source4_1080p_dec_infer-resnet_tracker_sgie_tiled_display_int8.txt
    
  2. To achieve the best performance, set the max clock settings.

    sudo jetson_clocks
    
    deepstream-app -c /opt/nvidia/deepstream/deepstream/samples/configs/deepstream-app/source4_1080p_dec_infer-resnet_tracker_sgie_tiled_display_int8.txt 2> /dev/null
    

    Note

    Messages such as WARNING: Deserialize engine failed because file path: <engine-name> open error are expected for engines that are not present.

Test 30 Streams Video Decode and TensorRT Object Detection with Output Rendered to the Display

  1. To avoid dropping frames during playback, run the following command.

    sudo sed -i 's/sync=1/sync=0/' /opt/nvidia/deepstream/deepstream/samples/configs/deepstream-app/source30_1080p_dec_infer-resnet_tiled_display_int8.txt
    
  2. To achieve the best performance, set the max clock settings and run the following commands.

    sudo jetson_clocks
    
    deepstream-app -c /opt/nvidia/deepstream/deepstream/samples/configs/deepstream-app/source30_1080p_dec_infer-resnet_tiled_display_int8.txt 2> /dev/null
    
  3. Verify that the perf rate display is approximately 21 fps.

    **PERF:  21.39 (21.16)  21.39 (21.09)   21.39 (21.09)   21.39 (21.09)   21.39 (21.09)   21.39 (21.05)   21.39 (21.09)   21.39 (21.09)   21.39 (21.16)   21.39 (21.09)   21.39 (21.05)   21.39 (21.10)   21.39 (21.16)   21.39 (21.09)   21.39 (21.09) 21.39 (21.09)   21.39 (21.09)   21.39 (21.16)   21.39 (21.16)   21.39 (21.16)   21.39 (21.16)   21.39 (21.09)   21.39 (21.09)   21.39 (21.09)   21.39 (21.16)   21.39 (21.09)   21.39 (21.16)   21.39 (21.16)   21.39 (21.09)   21.39 (21.09)
    

Clicking the left mouse button on a video stream zooms into that stream, and clicking the right mouse button zooms back out. The frame rate should be greater than 20 fps (as shown above) for Jetson AGX Xavier and around 13-14 fps for Jetson Xavier NX. Messages such as WARNING: Deserialize engine failed because file path: <engine-name> open error are expected for engines that are not present.
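The PERF line can be checked mechanically against the frame-rate expectation. The sketch below assumes the `value (running-average)` pair format shown in the sample output; both helper names are ours.

```python
# Sketch: parse a deepstream-app **PERF: line and check per-stream FPS.
import re

def perf_fps(line):
    # Instantaneous values; the parenthesized numbers are running averages.
    return [float(v) for v in re.findall(r"(\d+\.\d+)\s+\(", line)]

def all_streams_above(line, threshold=20.0):
    rates = perf_fps(line)
    return bool(rates) and min(rates) >= threshold
```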

Triton

Run the Sample DeepStream Triton Application

  1. Install Triton.

    cd /opt/nvidia/deepstream/deepstream/samples
    sudo ./prepare_ds_triton_model_repo.sh
    sudo apt -y install ffmpeg
    sudo ./prepare_classification_test_video.sh
    sudo ./triton_backend_setup.sh
    
  2. Remove the GStreamer cache and verify that the nvinferserver plugin is present.

    rm ~/.cache/gstreamer-1.0/registry.aarch64.bin
    gst-inspect-1.0 nvinferserver
    
  3. Run the sample DeepStream Triton application.

    export DISPLAY=<local-display>
    cd /opt/nvidia/deepstream/deepstream/samples/configs/deepstream-app-triton
    deepstream-app -c source4_1080p_dec_infer-resnet_tracker_sgie_tiled_display_int8.txt 2> /dev/null
    

Expected Result: The DeepStream Triton application should run successfully.

Jetson AI Benchmarks

Run the Jetson AI Benchmarks

  1. Run the following commands.

    cd ${HOME}
    git clone https://github.com/NVIDIA-AI-IOT/jetson_benchmarks.git
    cd jetson_benchmarks
    mkdir models
    sudo sh install_requirements.sh
    
  2. Ensure that the fan service is running, and download the models.

    sudo systemctl restart nvfancontrol.service
    
    python3 utils/download_models.py --all --csv_file_path ./benchmark_csv/orin-benchmarks.csv --save_dir ${HOME}/jetson_benchmarks/models
    
  3. Set Orin to the maximum power mode, reboot the device when prompted, and run the benchmarks.

    sudo nvpmodel -m 0
    
    cd ${HOME}/jetson_benchmarks
    sudo python3 benchmark.py --all --csv_file_path ./benchmark_csv/orin-benchmarks.csv --model_dir ${HOME}/jetson_benchmarks/models --jetson_clocks
    

Expected Result: The measured performance should be comparable with the reference TensorRT performance numbers for Jetson AGX Orin published by NVIDIA.
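Comparing measured numbers against the published references can be scripted. This is a sketch only: the CSV column names (`model`, `fps`), the reference table, and the helper name are illustrative and do not match the jetson_benchmarks output format.

```python
# Sketch: flag models whose measured FPS falls below a reference table.
import csv
import io

def below_reference(measured_csv, reference, tolerance=0.95):
    slow = []
    for row in csv.DictReader(io.StringIO(measured_csv)):
        name, fps = row["model"], float(row["fps"])
        ref = reference.get(name)
        if ref is not None and fps < ref * tolerance:
            slow.append(name)
    return slow
```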

VPI

Before you begin:

  • VPI should be installed on the device, and the sample applications should be available in the device rootfs.

    ubuntu@tegra-ubuntu:~$ dpkg -l | grep -i vpi
    ii  libnvvpi2                                  2.4.1                                             arm64        NVIDIA Vision Programming Interface library
    ii  vpi2-demos                                 2.4.1                                             arm64        NVIDIA VPI GUI demo applications
    ii  vpi2-dev                                   2.4.1                                             arm64        NVIDIA VPI C/C++ development library and headers
    ii  vpi2-samples                               2.4.1                                             arm64        NVIDIA VPI command-line sample applications
    
  • Provide sudo/root permissions to the user to compile the sample applications, or log in as root.

  1. Navigate to the 2D Image Convolution sample directory.

    cd /opt/nvidia/vpi3/samples/01-convolve_2d
    
  2. Build the C++ sample.

    sudo cmake .
    sudo make
    
  3. Run the C++ sample.

    sudo ./vpi_sample_01_convolve_2d cpu ../assets/kodim08.png

  4. View the output (refer to https://docs.nvidia.com/vpi/sample_conv2d.html).

    eog edges_cpu.png
    
  5. Run the Python sample.

    sudo python3 main.py cpu ../assets/kodim08.png
    
  6. View the output (refer to https://docs.nvidia.com/vpi/sample_conv2d.html).

    eog edges_python3_cpu.png
    

Expected Result: The 2D Image Convolution sample should run successfully. Refer to https://docs.nvidia.com/vpi/sample_conv2d.html for more information.
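Conceptually, the sample applies a small kernel at every pixel. As a minimal, hardware-free sketch of the same operation, the following pure-Python helper (our own, not part of VPI) performs a zero-padded 2D convolution:

```python
# Sketch: zero-padded 2D convolution, the operation the VPI sample offloads
# to the CPU/GPU/PVA backends.
def convolve2d(image, kernel):
    kh, kw = len(kernel), len(kernel[0])
    ph, pw = kh // 2, kw // 2
    h, w = len(image), len(image[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            acc = 0.0
            for ky in range(kh):
                for kx in range(kw):
                    iy, ix = y + ky - ph, x + kx - pw
                    if 0 <= iy < h and 0 <= ix < w:  # zero padding at borders
                        acc += image[iy][ix] * kernel[ky][kx]
            out[y][x] = acc
    return out
```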

Run the Stereo Disparity Sample

  1. Run the following command.

    cd /opt/nvidia/vpi3/samples/02-stereo_disparity
    
  2. Build the C++ sample.

    sudo cmake .
    sudo make
    
  3. Run the C++ sample.

    sudo ./vpi_sample_02_stereo_disparity cuda ../assets/chair_stereo_left.png ../assets/chair_stereo_right.png
    
  4. View the outputs (refer to https://docs.nvidia.com/vpi/sample_stereo.html).

    eog confidence_cuda.png
    eog disparity_cuda.png
    
  5. Run the Python sample.

    sudo python3 main.py cuda ../assets/chair_stereo_left.png ../assets/chair_stereo_right.png
    
  6. View the output (refer to https://docs.nvidia.com/vpi/sample_stereo.html).

    eog confidence_python3_cuda.png
    eog disparity_python3_cuda.png
    

Expected Result: The Stereo Disparity sample should run successfully. Refer to https://docs.nvidia.com/vpi/sample_stereo.html for more information.

Run the Harris Corners Detector Sample that Uses the PVA

  1. Run the following command.

    cd /opt/nvidia/vpi3/samples/03-harris_corners
    
  2. Build the C++ sample.

    sudo cmake .
    sudo make
    
  3. Run the C++ sample.

    sudo ./vpi_sample_03_harris_corners pva ../assets/kodim08.png
    
  4. View the output (refer to https://docs.nvidia.com/vpi/sample_harris_detector.html).

    eog harris_corners_pva.png
    
  5. Run the Python sample.

    sudo python3 main.py pva ../assets/kodim08.png
    
  6. View the output (refer to https://docs.nvidia.com/vpi/sample_harris_detector.html).

    eog harris_corners_python3_pva.png
    

Expected Result: The Harris Corners Detector sample that is using the PVA should run successfully.

Run the KLT Bounding Box Tracker

  1. Go to the sample directory.

    cd /opt/nvidia/vpi3/samples/06-klt_tracker
    
  2. Build the C++ sample.

    sudo cmake .
    sudo make
    
  3. Run the C++ sample.

    sudo ./vpi_sample_06_klt_tracker cuda ../assets/dashcam.mp4 ../assets/dashcam_bboxes.txt
    
  4. Play the output video (refer to https://docs.nvidia.com/vpi/sample_klt_tracker.html).

  5. Run the Python sample.

    sudo python3 main.py cuda ../assets/dashcam.mp4 ../assets/dashcam_bboxes.txt
    
  6. Play the output video (refer to https://docs.nvidia.com/vpi/sample_klt_tracker.html).

Expected Result: The KLT Bounding Box Tracker sample should run successfully. Refer to https://docs.nvidia.com/vpi/sample_klt_tracker.html for more information.

Run the Temporal Noise Reduction

  1. Go to the sample directory.

    cd /opt/nvidia/vpi3/samples/09-tnr
    
  2. Build the C++ sample.

    sudo cmake .
    sudo make
    
  3. Run the C++ sample.

    sudo ./vpi_sample_09_tnr cuda ../assets/noisy.mp4
    
  4. Play the output video (refer to https://docs.nvidia.com/vpi/sample_tnr.html).

  5. Run the Python sample.

    sudo python3 main.py cuda ../assets/noisy.mp4
    
  6. Play the output video (refer to https://docs.nvidia.com/vpi/sample_tnr.html).

Expected Result: The Temporal Noise Reduction sample should run successfully. Refer to https://docs.nvidia.com/vpi/sample_tnr.html for more information.

Run the Perspective Warp

  1. Go to the sample directory.

    cd /opt/nvidia/vpi3/samples/10-perspwarp
    
  2. Build the C++ sample.

    sudo cmake .
    sudo make
    
  3. Run the C++ sample.

    sudo ./vpi_sample_10_perspwarp cuda ../assets/noisy.mp4
    
  4. Play the output video (refer to https://docs.nvidia.com/vpi/sample_perspwarp.html).

  5. Run the Python sample.

    sudo python3 main.py cuda ../assets/noisy.mp4
    
  6. Play the output video (refer to https://docs.nvidia.com/vpi/sample_perspwarp.html).

Expected Result: The Perspective Warp sample should run successfully. Refer to https://docs.nvidia.com/vpi/sample_perspwarp.html for more information.

Run the Background Subtractor

  1. Go to the sample directory.

    cd /opt/nvidia/vpi3/samples/14-background_subtractor
    
  2. Build the C++ sample.

    sudo cmake .
    sudo make
    
  3. Run the C++ sample.

    sudo ./vpi_sample_14_background_subtractor cpu ../assets/pedestrians.mp4
    
  4. Play the output video (refer to https://docs.nvidia.com/vpi/sample_background_subtractor.html).

  5. Run the Python sample.

    sudo python3 main.py cpu ../assets/pedestrians.mp4
    
  6. Play the output video (refer to https://docs.nvidia.com/vpi/sample_background_subtractor.html).

Expected Result: The Background Subtractor sample should run successfully. Refer to https://docs.nvidia.com/vpi/sample_background_subtractor.html for more information.
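The Python variants of the samples above can be exercised in one pass. The following sketch only prints the command for each sample rather than executing it (so it is safe to review off-target); the directory, backend, and argument values are taken directly from the steps in this plan, and the SAMPLES table format is an illustrative convention, not part of VPI.

```shell
#!/bin/sh
# Sketch: iterate the Python samples covered in this test plan.
# Each record is directory|backend|arguments, taken from the steps above.
SAMPLES='02-stereo_disparity|cuda|../assets/chair_stereo_left.png ../assets/chair_stereo_right.png
03-harris_corners|pva|../assets/kodim08.png
06-klt_tracker|cuda|../assets/dashcam.mp4 ../assets/dashcam_bboxes.txt
09-tnr|cuda|../assets/noisy.mp4
10-perspwarp|cuda|../assets/noisy.mp4
14-background_subtractor|cpu|../assets/pedestrians.mp4'

echo "$SAMPLES" | while IFS='|' read -r dir backend args; do
    # Print the command that would be run on the target for each sample.
    echo "would run: cd /opt/nvidia/vpi3/samples/$dir && sudo python3 main.py $backend $args"
done
```

On the target, replacing the echo inside the loop with an actual `(cd ... && sudo python3 main.py ...)` subshell would run all six Python samples back to back.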