Developer Guide Release

Foundation Services Stack
The NVIDIA DRIVE™ AGX platform Foundation services runtime software stack provides the infrastructure for all the components of the platform. With this infrastructure, multiple guest operating systems can run on the hardware, with the Hypervisor managing use of hardware resources.
Foundation Components
The foundation services components are as follows.
Hypervisor
Trusted software server that separates the system into partitions. Each partition can contain an operating system or a bare-metal application. The Hypervisor manages:
• Guest OS partitions and the isolation between them.
• Partitions’ virtual views of the CPU and memory resources.
• Hardware interactions.
• Run-lists.
• Channel recovery.
The Hypervisor is optimized to run on the Armv8.2 architecture.
I/O Sharing Server
Allocates peripherals that the Guest OS needs to control.
Virtualizes storage devices on the SoC.
Components that manage isolated hardware services for:
• Graphics and computation (GPU)
• Codecs (MM)
• Display, video capture and audio I/O (AVIO)
Partition Monitor
The Partition Monitor:
• Dynamically loads/unloads and starts/stops partitions and their virtual memory allocations.
• Logs partition use.
• Provides interactive debugging of guest operating systems.
Starting in this DRIVE OS release, the Guest OS is assigned seven dedicated CPU cores and the other VMs depicted share one core. Earlier releases either shared all functions across all eight cores or assigned six cores to the Guest OS and two cores to the VM servers.
RT Mode: During normal operation of Carmel Dynamic Code Optimization (DCO), optimizer activity can delay system interrupt processing by up to 1 ms. In systems where a 1 ms interrupt delay is not acceptable, configure Carmel Real Time mode (RT mode).
In RT mode, the PCT configures Carmel optimizer scheduling to ensure smaller delays before interrupt processing, and stronger assurances that interrupt processing itself has adequate processing time.
Additional Platform Components
In addition to the Foundation services components and the OS-specific components, further components are available. These components are provided separately to support customized platform development and include:
The NVIDIA CUDA® Deep Neural Network library (cuDNN) is a GPU-accelerated library of primitives for deep neural networks. cuDNN provides highly tuned implementations for standard routines such as forward and backward convolution, pooling, normalization, and activation layers. cuDNN is part of the NVIDIA Deep Learning SDK.
Consult the cuDNN library documentation for deep neural network development.
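To illustrate the primitive-level style of the cuDNN API, the following is a minimal sketch that applies a ReLU activation to a tensor on the GPU. It assumes the CUDA toolkit and cuDNN are installed (link with -lcudnn) and an NVIDIA GPU is present; it is illustrative only and not specific to DRIVE OS.

```cuda
// Minimal cuDNN sketch: ReLU activation over a 1x3x224x224 float tensor.
// Assumes CUDA + cuDNN are installed; compile with nvcc and link -lcudnn.
#include <cudnn.h>

int main() {
    cudnnHandle_t handle;
    cudnnCreate(&handle);

    // Describe the input/output tensor layout: NCHW, float32.
    cudnnTensorDescriptor_t desc;
    cudnnCreateTensorDescriptor(&desc);
    cudnnSetTensor4dDescriptor(desc, CUDNN_TENSOR_NCHW, CUDNN_DATA_FLOAT,
                               1, 3, 224, 224);

    // Describe the activation primitive: ReLU.
    cudnnActivationDescriptor_t act;
    cudnnCreateActivationDescriptor(&act);
    cudnnSetActivationDescriptor(act, CUDNN_ACTIVATION_RELU,
                                 CUDNN_PROPAGATE_NAN, 0.0);

    // Allocate device buffers (initialization omitted for brevity).
    float *in = nullptr, *out = nullptr;
    size_t bytes = 1 * 3 * 224 * 224 * sizeof(float);
    cudaMalloc(&in, bytes);
    cudaMalloc(&out, bytes);

    // y = alpha * relu(x) + beta * y
    const float alpha = 1.0f, beta = 0.0f;
    cudnnActivationForward(handle, act, &alpha, desc, in, &beta, desc, out);

    cudaFree(in); cudaFree(out);
    cudnnDestroyActivationDescriptor(act);
    cudnnDestroyTensorDescriptor(desc);
    cudnnDestroy(handle);
    return 0;
}
```

In production code, every cudnn* and cuda* call returns a status value that should be checked; that boilerplate is omitted here to keep the sketch short.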
NVIDIA TensorRT™ is a high-performance deep learning inference optimizer and runtime that delivers low latency, high-throughput inference for deep learning applications.
Consult the TensorRT Documentation for deep learning development.
CUDA® is a parallel computing platform and programming model developed by NVIDIA for general computing on graphics processing units (GPUs). With CUDA, developers can dramatically accelerate computing applications by harnessing the power of GPUs.
Consult the CUDA Samples provided as an educational resource.
Consult the CUDA Computing Platform Development Guide for general purpose computing development.
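As a concrete example of the CUDA programming model, the following sketch launches a vector-add kernel across many GPU threads. It requires an NVIDIA GPU and the nvcc compiler; it is a generic CUDA example, not specific to DRIVE OS.

```cuda
// Minimal CUDA sketch: element-wise vector add, one element per thread.
// Compile with nvcc; requires an NVIDIA GPU.
#include <cuda_runtime.h>
#include <cstdio>

__global__ void vecAdd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // global thread index
    if (i < n) c[i] = a[i] + b[i];                  // guard against overrun
}

int main() {
    const int n = 1 << 20;                // 1M elements
    size_t bytes = n * sizeof(float);

    // Unified memory is accessible from both the CPU and the GPU.
    float *a, *b, *c;
    cudaMallocManaged(&a, bytes);
    cudaMallocManaged(&b, bytes);
    cudaMallocManaged(&c, bytes);
    for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

    // Launch enough 256-thread blocks to cover all n elements.
    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    vecAdd<<<blocks, threads>>>(a, b, c, n);
    cudaDeviceSynchronize();              // wait for the kernel to finish

    printf("c[0] = %f\n", c[0]);          // expect 3.0 on success
    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}
```

The grid/block launch configuration (`<<<blocks, threads>>>`) is the core idea the paragraph describes: the same kernel function runs in parallel across thousands of GPU threads, each handling one element.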