NVIDIA Tegra
NVIDIA DRIVE OS 5.1 Linux SDK

Developer Guide
5.1.9.0 Release


 
Foundation Virtualization Stack
 
The NVIDIA DRIVE™ AGX Platform Foundation Services framework provides virtualization stack technology that enables running multiple operating system stacks with different security and safety requirements on a single device.
The framework virtualization stack consists of the following components:
Hypervisor kernel
Implements the virtualization features specific to the operating system.
Partition Configuration Table
A concatenated set of header files that represents a virtual configuration. The binary image of the partition configuration table is appended to the Hypervisor image. When loaded on the target platform, the concatenated image runs multiple guest OSs.
Partition Loader
Loads the Guest OS.
Monitor Partition
Maintains and monitors the health of each guest using the Watchdog Timer.
Resource Manager Server Partition
Manages the server partitions for the various virtualized component servers.
Boot and Power Manager Processor
Firmware that runs on the Cortex-R5 processor. During boot, the BPMP executes the boot ROM code and controls the SoC boot sequence. After boot, it runs power management functions.
Audio
Provides an Audio Server that para-virtualizes the Audio Processing Engine (APE) of the Tegra device.
I2C
Allows multiple guests to access the same I2C controller without requiring prior knowledge of each other. It also provides a framework for assigning slave devices to one or more guests.
Virtual System Configuration Storage
Manages the storage configuration files that are required for the flashing script to identify the hypervisor and guest partitions that must be flashed.
Security Engine
Enables para-virtualization of the security engine of the Tegra SoC, making it available to the software of a virtual machine through a virtualized interface similar to the native one.
Watchdog Timer
A framework that consists of a system-wide WDT monitor service, running in a privileged monitor partition, and one or more WDT clients, each running in a guest partition (see the heartbeat sketch after this list).
System Manager
Coordinates the ordering of each partition's state transitions during a system state transition.
Inter-VM Communication Infrastructure
Provides event and data exchange between the operating systems running on top of the hypervisor architecture.
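To make the watchdog monitor/client split concrete, the following self-contained C sketch simulates the pattern in a single process: a monitor thread (standing in for the privileged monitor partition) flags a client that stops sending heartbeats. The function names, timeout value, and threading setup are illustrative assumptions and are not the DRIVE OS WDT client API.

/*
 * Sketch of the WDT monitor/client pattern: a monitor declares a client
 * unhealthy if no heartbeat ("kick") arrives within the timeout window.
 * All names and values here are illustrative only.
 */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>
#include <time.h>
#include <unistd.h>

#define WDT_TIMEOUT_SEC 2       /* monitor flags the client after this */

static atomic_long last_kick;   /* timestamp of the client's last heartbeat */

static void *wdt_monitor(void *arg)
{
    (void)arg;
    for (;;) {
        sleep(1);
        long now = time(NULL);
        if (now - atomic_load(&last_kick) > WDT_TIMEOUT_SEC) {
            /* A real monitor partition would recover or reset the guest. */
            fprintf(stderr, "WDT expired: guest missed its heartbeat\n");
            return NULL;
        }
    }
}

int main(void)
{
    atomic_store(&last_kick, time(NULL));

    pthread_t monitor;
    pthread_create(&monitor, NULL, wdt_monitor, NULL);

    for (int i = 0; i < 5; i++) {
        /* Guest workload runs here; kick the watchdog while healthy. */
        atomic_store(&last_kick, time(NULL));
        sleep(1);
    }

    /* Stop kicking to demonstrate the timeout path. */
    pthread_join(monitor, NULL);
    return 0;
}

In the virtualization stack itself, the heartbeat would cross a partition boundary to the monitor partition rather than pass through process-local memory as in this simulation.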
The framework virtualization stack enables automotive designs that support:
Automotive Open System Architecture (AUTOSAR) and Audio Video Bridging (AVB)
Monitoring guest operating systems (OS) and enabling quick recovery
Controlling guest OS access to critical hardware, such as graphics hardware
Early video
Isolating resources between the main CPU complex and external CPU cores
Handling resource partitioning between in-vehicle infotainment (IVI) and cluster stacks
Enabling GPU and display sharing
IVI plus Instrument Cluster (IC)
IVI with Advanced Driver Assistance Systems (ADAS)
Handling resource partitioning between IVI, IC, and ADAS stacks
To support early IVI boot, the framework virtualization stack enables:
Specified functions to be available very early during boot.
Implementation of these functions outside the framework services stack.
Light-weight, early-boot partitions with full access to system resources and I/O.
Loading and running in parallel to the operating system loading and initialization.
Booting without complicated I/O sharing and handover procedures.
Using the same API functionality during runtime and early boot for better quality and a smaller footprint.
Virtualization Partition Configuration Capabilities
With the foundation virtualization stack, each guest OS runs in a dedicated partition that defines the system resources available to the operating system. Partitions are configured using the Partition Configuration Table (PCT). The PCT is defined by a set of header files that specify the properties of a virtual configuration.
The DRIVE Development Platform Foundation package virtualization solution is pre-configured to meet most guest OS requirements and configurations.
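For illustration only, the snippet below sketches the kind of per-guest properties such a PCT header file might declare (guest memory range, virtual CPU count, assigned devices). All macro names and values are hypothetical and do not reflect the actual PCT schema shipped with the SDK.

/*
 * guest_ivi_config.h -- hypothetical example of a per-guest entry in a
 * partition configuration. Names and values are illustrative only.
 */
#ifndef GUEST_IVI_CONFIG_H
#define GUEST_IVI_CONFIG_H

#define GUEST_IVI_ID            1               /* partition identifier    */
#define GUEST_IVI_NUM_VCPUS     4               /* virtual CPUs assigned   */
#define GUEST_IVI_MEM_BASE      0x80000000ULL   /* guest RAM base address  */
#define GUEST_IVI_MEM_SIZE      (2ULL << 30)    /* 2 GiB of guest RAM      */

/* Devices made available to this guest, directly or via virtual servers. */
#define GUEST_IVI_I2C_CONTROLLERS   { 2, 5 }    /* shared via the I2C server */
#define GUEST_IVI_USES_AUDIO_SERVER 1           /* APE via the Audio Server  */

#endif /* GUEST_IVI_CONFIG_H */

In practice, headers of this kind are concatenated into the PCT, built into a binary image, and appended to the Hypervisor image as described above.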