Docker User Guide

The host system should have Ubuntu 18.04 (64-bit) installed, at least 200 GB of free disk space, and a network connection. For the requirements to run Docker Engine, see the Docker website.
Note:
Starting with the 5.2.6.0 release, Docker tarballs are deprecated and are no longer posted on NVONLINE.
The following images are available on NGC in this release:
nvcr.io/drive/driveos-sdk/drive-agx-xavier-linux-aarch64-sdk-oss-source-x86:5.2.6.0
    Debian source packages used in the Docker images.
nvcr.io/drive/driveos-sdk/drive-agx-xavier-linux-aarch64-sdk-build-x86:5.2.6.0
    Builds the DRIVE OS Linux SDK and flashes the target.
nvcr.io/drive/driveos-sdk/drive-agx-xavier-linux-aarch64-sdk-flash-x86:5.2.6.0
    Flashes DRIVE OS Linux only.

Downloading and Installing Docker

Use the following procedure to download and install Docker.

Download and Install Docker

On the host system, download the type of Docker software needed for your organization (Enterprise, Desktop, or other) from https://www.docker.com/.
 
If you downloaded Docker as a Debian package (.deb), install Docker using dpkg with a command similar to the following, where <docker.deb> is replaced by the actual filename of the download:
sudo dpkg -i <docker.deb>
Alternatively, download an installation script from https://get.docker.com/ and install Docker on your host system with a command similar to the following:
wget -qO- https://get.docker.com/ | ${SHELL}
Note:
The minimum required version of Docker Engine to run DRIVE OS Linux Docker containers is 19.03.
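To confirm that the installed Docker Engine meets this requirement, you can query the version; the format string below only prints the version fields:
# The server (engine) version should be 19.03 or later.
sudo docker version --format 'Client: {{.Client.Version}}  Server: {{.Server.Version}}'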

Create an NVIDIA DRIVE OS Directory

Create or select a directory for the NVIDIA DRIVE OS source code. This directory will be mounted into the Docker container.
The examples in this document use the ${WORKSPACE} directory. Ensure that the relevant NVIDIA DRIVE OS release is available in this directory. See the Development Guide for the SDK or PDK you are containerizing, which includes instructions for obtaining NVIDIA DRIVE OS using SDK Manager.
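For example, to create a workspace directory (the path below is illustrative; any directory with sufficient free space works):
# Choose a workspace location and create it.
export WORKSPACE=${HOME}/driveos-workspace
mkdir -p ${WORKSPACE}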

NVIDIA GPU Cloud (NGC)

NVIDIA GPU Cloud Access

To access NGC (http://ngc.nvidia.com), you need an NVIDIA Developer account (http://developer.nvidia.com) and membership in the NVIDIA DRIVE® Developer Program for DRIVE AGX. Register for an account and apply for membership in the Developer Program before proceeding.
Note:
It may take up to 24 hours for access to the Developer Program to be approved.

Sign into NVIDIA GPU Cloud

1. On the host system, sign in to NGC (https://ngc.nvidia.com) using your NVIDIA Developer credentials.
2. After signing in, select the ‘drive’ organization. DRIVE Platform Docker containers are located under “PRIVATE REGISTRY”.
3. Select “Setup” under the user menu at the top right of the page and generate an API key, which is required to pull Docker images.
Note:
Using NVIDIA GPU Cloud is beyond the scope of this document. Contact NVIDIA for more information.
 
4. From the command line on your host system, log in to NVIDIA GPU Cloud with the following command, replacing ${YOUR_USERNAME} and ${YOUR_API_KEY} with your username and API key:
sudo docker login -u "${YOUR_USERNAME}" -p "${YOUR_API_KEY}" nvcr.io
Note:
${YOUR_USERNAME} is commonly $oauthtoken. Please note the $ is part of the username.
See Docker documentation for more information on login methods.
After you log in, you have access to NVIDIA Docker images, depending on your specific permissions in the registry.
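To avoid leaving the API key in your shell history, you can instead pipe it to docker login via --password-stdin (a minimal sketch; this assumes the key is stored in the ${YOUR_API_KEY} variable):
# Read the API key from standard input instead of the command line.
echo "${YOUR_API_KEY}" | sudo docker login -u '$oauthtoken' --password-stdin nvcr.io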

Pulling Docker Images from NGC

Pull Docker Images

1. Log in to the NVIDIA Docker Registry using the instructions in the previous section.
This example pulls and uses the following image:
nvcr.io/drive/driveos-sdk/drive-agx-xavier-linux-aarch64-sdk-build-x86:${SW_VERSION}
2. On the host system, pull the image using the following command:
sudo docker pull nvcr.io/drive/driveos-sdk/drive-agx-xavier-linux-aarch64-sdk-build-x86:${SW_VERSION}
3. After successfully pulling the Docker image, start the Docker container with the following command on the host system:
sudo docker run -it --privileged --net=host -v /dev/bus/usb:/dev/bus/usb -v ${WORKSPACE}:/home/nvidia/ nvcr.io/drive/driveos-sdk/drive-agx-xavier-linux-aarch64-sdk-build-x86:${SW_VERSION}
Note:
--privileged allows access to all host devices (for example, the /dev/ttyUSB* ports for AURIX and Xavier, used to flash a board connected to a host USB port).
 
Note:
sudo is required for USB and Internet access, which some activities, such as flashing, may need.
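To confirm that the pull succeeded and to review containers created from the image, you can list local images and containers, for example:
# List the pulled DRIVE OS image and all containers on the host.
sudo docker images nvcr.io/drive/driveos-sdk/drive-agx-xavier-linux-aarch64-sdk-build-x86
sudo docker ps -a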

Building DRIVE OS Linux SDK

Use the procedures in this chapter to build the NVIDIA DRIVE OS Linux SDK in the Docker container.

Start the Docker Container

Log on to the NVIDIA Docker Registry. If you need to download a Docker image, use the procedures in the Pulling Docker Images from NGC section of this document.
Start the Docker container with the following command on the host system:
sudo docker run -it --privileged --net=host -v ${WORKSPACE}:/home/nvidia/ nvcr.io/drive/driveos-sdk/drive-agx-xavier-linux-aarch64-sdk-build-x86:${SW_VERSION}
where ${SW_VERSION} is the version of NVIDIA DRIVE OS and ${WORKSPACE} is the path containing the code to be built inside the Docker container.
Note:
Enter all subsequent commands at the Docker container command prompt.

Compile Using the Docker Container

Compile the source code using one of the following supported toolchains:
/drive/toolchains/gcc-linaro-7.3.1-2018.05-x86_64_aarch64-linux-gnu/bin/aarch64-linux-gnu-gcc
    Kernel and user-space binaries
/usr/local/cuda/bin/nvcc
    CUDA
/drive/toolchains/gcc-linaro-7.3.1-2018.05-x86_64_aarch64-linux-gnu/bin/aarch64-linux-gnu-g++
    TensorRT, cuDNN
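As a quick smoke test of the aarch64 cross toolchain (a minimal sketch; hello.c is a scratch file you create inside the container):
# Create a trivial C program and cross-compile it for aarch64.
cat > hello.c << 'EOF'
#include <stdio.h>
int main(void) {
    printf("Hello from aarch64\n");
    return 0;
}
EOF
/drive/toolchains/gcc-linaro-7.3.1-2018.05-x86_64_aarch64-linux-gnu/bin/aarch64-linux-gnu-gcc -o hello hello.c
file hello   # should report a 64-bit ARM aarch64 ELF executable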

Example: Compile CUDA

1. Check that the CUDA samples are in /usr/local/cuda-10.2/samples.
2. Determine the target architecture and compile the samples with the corresponding command below. SMS=72 compiles for Xavier; for Turing, use SMS=75.
x86:
make -C /usr/local/cuda-10.2/samples/
AARCH64:
make -C /usr/local/cuda-10.2/samples/ TARGET_ARCH=aarch64 SMS=72
For samples that require targetfs libraries, use:
make -C /usr/local/cuda-10.2/samples/ TARGET_ARCH=aarch64 SMS=72 TARGET_FS=/drive/drive-t186ref-linux/targetfs/
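To iterate on a single sample rather than the whole tree, you can build just that sample's directory (assuming the standard CUDA 10.2 sample layout; vectorAdd is one example):
# Build one sample for Xavier (SMS=72) instead of the full sample tree.
make -C /usr/local/cuda-10.2/samples/0_Simple/vectorAdd TARGET_ARCH=aarch64 SMS=72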

Example: Compile TensorRT/cuDNN

1. Check that the TensorRT samples are in /usr/src/tensorrt/samples/.
2. Compile the samples with the corresponding command below:
x86:
make -C /usr/src/tensorrt/samples/
AARCH64:
make -C /usr/src/tensorrt/samples/ CC=/drive/toolchains/gcc-linaro-7.3.1-2018.05-x86_64_aarch64-linux-gnu/bin/aarch64-linux-gnu-g++ TARGET=aarch64
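Similarly, a single TensorRT sample can be built on its own (assuming the standard TensorRT sample layout; sampleMNIST is one example):
# Cross-compile one TensorRT sample for aarch64.
make -C /usr/src/tensorrt/samples/sampleMNIST CC=/drive/toolchains/gcc-linaro-7.3.1-2018.05-x86_64_aarch64-linux-gnu/bin/aarch64-linux-gnu-g++ TARGET=aarch64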

Flashing DRIVE OS Linux

Use the procedures in this section to flash NVIDIA DRIVE OS Linux to the target system from the Docker container.

Flash Using the Docker Container

1. Log on to the NVIDIA Docker Registry. If you need to download a Docker image, use the procedures in the Pulling Docker Images from NGC section of this document.
2. Connect NVIDIA DRIVE™ AGX to the host system.
Note:
Ensure that the NVIDIA DRIVE AGX Pegasus is connected to the host system, and that no other processes, such as tcu_muxer or Minicom, are holding a lock on /dev/ttyUSB* before starting the Docker container.
3. Start the Docker container and flash a Hypervisor + Linux configuration onto the NVIDIA DRIVE AGX using the following command. Replace ${SW_VERSION} with the version of NVIDIA DRIVE OS, as specified in the Release Notes.

sudo docker run -it --privileged --net=host -v /dev/bus/usb:/dev/bus/usb -v /drive_flashing:/drive_flashing nvcr.io/drive/driveos-sdk/drive-agx-xavier-linux-aarch64-sdk-flash-x86:${SW_VERSION}
 
You may add the --rm argument to the docker run command to remove a container automatically upon exit.
${SW_VERSION} is the version of NVIDIA DRIVE OS.
Upon execution of the docker run command, a command prompt opens. Execute the following command to start flashing the target:
./flash.sh
You can specify the Tegra, AURIX port, boot chains, and PCT variant used to flash the target by adding the following parameters to the flash command:
--tegra=X
    Default: AB. Values: AB, A, B.
    Flash both Tegra A and Tegra B, or flash Tegra A only, or flash Tegra B only.
--aurixport=X
    Default: the AURIX port is autodetected. Values: /dev/ttyUSBX, where /dev/ttyUSBX is a valid AURIX port.
    The host may be connected to multiple targets, so specify the AURIX port to correctly identify the target to be flashed.
--full=X
    Default: true. Values: true, false.
    true: flash both boot chains. false: flash a single boot chain.
--pct_variant=X
    Default: dev.
Note:
Flashing scripts may not update the firmware of some devices, and do not perform any post-flash verification.
 
Note:
“dev” is the only PCT variant that is expected to work. flash.sh uses the “dev” PCT variant by default; do not specify the --pct_variant argument.
 
Note:
--full=false only works when the binaries in the recovery partition have not changed since the last flash.
Example: Flash with flash.sh
./flash.sh
    Flash with the default arguments: flash both Tegra A and Tegra B, use the default AURIX port /dev/ttyUSB3, and flash both boot chains.
./flash.sh --tegra=A
    Flash with the default arguments, except flash only Tegra A.
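The options can be combined; for example, to flash only Tegra A through a specific AURIX port (a sketch; /dev/ttyUSB3 is illustrative):
./flash.sh --tegra=A --aurixport=/dev/ttyUSB3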

Troubleshooting

The following error is seen during flashing:
Loop device mount failed
FlashTegraPDKInstaller failed, error 144!
Flash process exited with Error 53
Resolution: Ensure that loop devices are available before starting the container. Use the following command:
sudo losetup -f && sudo docker run <…>
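To inspect loop device availability before flashing, for example:
sudo losetup -f   # prints the first unused loop device, e.g., /dev/loop0
sudo losetup -a   # lists loop devices currently in use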

Finalizing DRIVE AGX System Setup

1. Connect Console to the Target.
To complete the setup of the DRIVE AGX platform, you MUST connect a console to the system to answer the system configuration prompts. Instructions for connecting a terminal emulator to the platform are in the DRIVE OS 5.2 Linux Developer Guide sections Using tcu_muxer and Terminal Emulation. The “putty” terminal emulation program is recommended for better display of the setup prompts.
2. Select SSH Profile and Other Setup Options. 
DRIVE OS Linux provides two profiles for security setup (including SSH): an NVIDIA enhanced security profile that uses ECDSA-based algorithms for SSH security, or the stock Ubuntu 18.04 configuration. Select the profile on the first setup screen. Both profiles enable the SSH server on the target by default.
Follow the prompts after the SSH profile selection to add additional users; see DRIVE OS Linux oem-config for more information. After these prompts are completed, the platform boots to the login prompt.
3. Enable Display.
To maximize the compute capacity of the DRIVE AGX platform, the DRIVE OS release does not include a Linux desktop by default. To enable the display or to install a desktop, do one of the following:
1. To enable the display without a desktop, start the X server as shown below. See Manually Starting X Server for more information.
$ sudo -b X -ac -noreset -nolisten tcp
2. To install the Ubuntu desktop, use the following instructions. See Installing GUI on the Target for more information.
$ sudo apt-get update
 
$ sudo apt-get install gdm3 ubuntu-unity-desktop

Appendix A: Updating Linux File System with Optional Features

This section describes how to add files to the Linux target file system.
1. Start the Docker container with the folder containing the optional run files mounted:
sudo docker run -it --privileged --net=host -v /dev/bus/usb:/dev/bus/usb -v ${WORKSPACE}:/home/nvidia/ nvcr.io/drive/driveos-sdk/drive-agx-xavier-linux-aarch64-sdk-flash-x86:${SW_VERSION}
where:
${SW_VERSION} is the version of NVIDIA DRIVE OS, and
${WORKSPACE} contains the run files to be installed in the container.
2. To install an optional run file, extract it with the following command, replacing ${FILE_NAME} with the name of the .run file:
bash /home/nvidia/${FILE_NAME} --target /drive/
The supported optional run files are as follows:
drive-t186ref-linux-*-dds.run
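For example, with a hypothetical DDS run file placed in ${WORKSPACE} (the filename below is illustrative; use the actual name of your .run file):
# drive-t186ref-linux-5.2.6.0-dds.run is a hypothetical filename.
bash /home/nvidia/drive-t186ref-linux-5.2.6.0-dds.run --target /drive/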
3. Flash the target as described in the section Flash Using the Docker Container.

Open-Source License Compliance

Use the procedures in this chapter to obtain the Debian source packages of the open-source utilities and libraries used by the various Docker images. These source packages let you view the source code and compile the desired software.

Obtain Source for Utilities and Libraries

1. Log on to the NVIDIA Docker Registry. If you need to download a Docker image, use the procedures in the Pulling Docker Images from NGC section of this document.
2. nvcr.io/drive/driveos-sdk/drive-agx-xavier-linux-aarch64-sdk-oss-source-x86:${SW_VERSION} includes the code of open source packages used in the following Docker images:
nvcr.io/drive/driveos-sdk/drive-agx-xavier-linux-aarch64-sdk-build-x86:${SW_VERSION}
nvcr.io/drive/driveos-pdk/drive-agx-xavier-linux-aarch64-pdk-build-x86:${SW_VERSION}
nvcr.io/drive/driveos-sdk/drive-agx-xavier-linux-aarch64-sdk-flash-x86:${SW_VERSION}
3. Start the Docker container with the following command on the host system:
sudo docker run -it --privileged --net=host -v ${WORKSPACE}:/home/nvidia/ nvcr.io/drive/driveos-sdk/drive-agx-xavier-linux-aarch64-sdk-oss-source-x86:${SW_VERSION}
where ${SW_VERSION} is the version of the NVIDIA DRIVE OS. The source packages are in the /drive directory.
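Inside the container, you can list the Debian source packages and unpack one for inspection (a sketch, assuming the packages ship as .dsc files with their accompanying tarballs; dpkg-source is part of dpkg-dev):
ls /drive/*.dsc                        # list available source packages
dpkg-source -x /drive/<package>.dsc    # unpack one package's source tree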