Welcome to DRIVE PX 2
Warning: Consult the Release Notes before making any changes to the default configuration of the NVIDIA DRIVE™ PX 2 platform.
Note: You can get the Quick Start Guide on NVONLINE. Look for a PDF file for this release that includes the “QSG” abbreviation.
Depending on what you want to do, read the related topics in the list below. Some links take you to NVIDIA developer content available on the Internet.
What do you want to do first?
This chapter provides a brief guide, located in the “Appendix,” to help you become familiar with the navigation basics.
This chapter provides information about the DRIVE PX 2 architecture, with a high-level overview of the hardware and software, such as the Hypervisor, Guest OS (Ubuntu), NVIDIA drivers, and NVIDIA tools.
This chapter provides guidance on setting up and connecting your AutoCruise (P3407) platform. This is the small form factor platform of the NVIDIA DRIVE™ PX 2 family, designed to handle functions such as highway automated driving and HD mapping. Read this chapter to learn how to set up, power on, and put AutoCruise (P3407) into recovery mode.
This chapter provides guidance on setting up and connecting your AutoChauffeur (P2379) platform. This platform configuration of the NVIDIA DRIVE™ PX 2 family has two SoCs and two discrete GPUs for point-to-point travel. Read this chapter to learn how to set up, power on, and put AutoChauffeur (P2379) into recovery mode.
This chapter provides information on the C-based, frame-level API library, which offers framework-agnostic, distinct software components for realizing various multimedia use case scenarios.
This chapter provides OpenGL ES programming tips and recommendations for managing binary shader programs.
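For context, the sketch below shows the general pattern that binary shader program management builds on: after linking once, retrieve the program binary with glGetProgramBinary so a later run can reload it with glProgramBinary instead of recompiling from source. This assumes an OpenGL ES 3.0 context; the helper names (SaveProgramBinary, LoadProgramBinary) are illustrative only, and the chapter covers the platform-specific recommendations.

```cpp
#include <GLES3/gl3.h>
#include <vector>

// Retrieve the binary of an already linked program so it can be cached
// (for example, written to storage) and reused on later runs.
std::vector<unsigned char> SaveProgramBinary(GLuint program, GLenum* formatOut)
{
    GLint length = 0;
    glGetProgramiv(program, GL_PROGRAM_BINARY_LENGTH, &length);

    std::vector<unsigned char> binary(static_cast<size_t>(length));
    GLsizei written = 0;
    glGetProgramBinary(program, length, &written, formatOut, binary.data());
    binary.resize(static_cast<size_t>(written));
    return binary;
}

// Reload a cached binary into a program object; fall back to compiling
// the shader sources if the binary is rejected (e.g., after a driver update).
bool LoadProgramBinary(GLuint program, GLenum format,
                       const std::vector<unsigned char>& binary)
{
    glProgramBinary(program, format, binary.data(),
                    static_cast<GLsizei>(binary.size()));
    GLint linked = GL_FALSE;
    glGetProgramiv(program, GL_LINK_STATUS, &linked);
    return linked == GL_TRUE;
}
```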
This link takes you to the NVIDIA DriveWorks™ SDK pages for information on deep learning, HD mapping, and supercomputing solutions. The DriveWorks installation on the target is located at /usr/local/driveworks.
This link takes you to the NVIDIA® TensorRT™ pages for information on the NVIDIA deep learning inference engine. TensorRT is installed on the x86 host by the SDK Manager application at /usr/local/nvidia/tensorrt/. Both x86 and aarch64 components are installed for cross-compilation.
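As a brief illustration of what that installation provides, the following sketch creates and releases a TensorRT builder through the C++ API. This is an outline under the classic API used on this platform generation; details such as the ILogger::log signature and the destroy() method vary across TensorRT releases, so consult the TensorRT documentation for your version.

```cpp
#include "NvInfer.h"
#include <cstdio>

// TensorRT requires an ILogger implementation for diagnostic output.
class Logger : public nvinfer1::ILogger
{
    void log(Severity severity, const char* msg) override
    {
        if (severity != Severity::kINFO)
            std::printf("[TensorRT] %s\n", msg);
    }
};

int main()
{
    Logger logger;
    nvinfer1::IBuilder* builder = nvinfer1::createInferBuilder(logger);
    if (!builder)
        return 1;

    // Networks, calibration, and engine building start from the builder.
    builder->destroy();
    return 0;
}
```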
This link takes you to the NVIDIA® CUDA® Toolkit pages for information on the parallel computing platform and programming model for CUDA-enabled GPUs. The CUDA installation on both the x86 host and the target is located at /usr/local/cuda.
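A quick way to confirm that the toolkit under /usr/local/cuda and the driver can see the GPUs is a small device-query program built against the CUDA runtime API, as sketched below; the source file name is only an example.

```cpp
// Build with the toolkit compiler, for example:
//   /usr/local/cuda/bin/nvcc device_query.cpp -o device_query
#include <cuda_runtime.h>
#include <cstdio>

int main()
{
    int count = 0;
    if (cudaGetDeviceCount(&count) != cudaSuccess || count == 0) {
        std::printf("No CUDA-capable device detected.\n");
        return 1;
    }

    // Print the name and compute capability of each visible GPU.
    for (int i = 0; i < count; ++i) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        std::printf("GPU %d: %s, compute capability %d.%d\n",
                    i, prop.name, prop.major, prop.minor);
    }
    return 0;
}
```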
This link takes you to the Foundation Development Guide, which describes the Hypervisor and the Foundation partitions that together implement virtualization technology to run multiple guest operating systems on the Tegra processor.