PyTorch for Jetson Platform

This document describes the key features, software enhancements and improvements, and known issues regarding PyTorch 1.13.0a0+936e930 on the Jetson platform.

Key Features and Enhancements

This release includes the following key features and enhancements.
  • The TF32 numerical format is enabled by default for cuBLAS and cuDNN operations on Ampere GPUs starting with the 22.06 release. If you encounter training issues, especially with regression, generative, or higher-order models, or when TF32 operations are used in pre- or post-processing steps, try disabling TF32 by setting the following:

    torch.set_float32_matmul_precision('highest')
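
    To see why TF32 can affect sensitive workloads, note that TF32 keeps only 10 explicit mantissa bits versus float32's 23. The following sketch simulates that truncation in pure Python; it is an illustration only, not a PyTorch API, and it truncates rather than using TF32's actual round-to-nearest behavior:

    ```python
    import struct

    def round_to_tf32(x: float) -> float:
        """Illustrative only: drop a float32 value's low 13 mantissa
        bits, leaving the 10 explicit mantissa bits TF32 retains."""
        bits = struct.unpack(">I", struct.pack(">f", x))[0]
        bits &= 0xFFFFE000  # keep sign, exponent, and top 10 mantissa bits
        return struct.unpack(">f", struct.pack(">I", bits))[0]

    print(round_to_tf32(1 + 2**-10))  # representable in TF32: 1.0009765625
    print(round_to_tf32(1 + 2**-11))  # below TF32 precision: rounds to 1.0
    ```

    TF32 can also be toggled per backend with torch.backends.cuda.matmul.allow_tf32 and torch.backends.cudnn.allow_tf32.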

Compatibility

Table 1. PyTorch compatibility with NVIDIA containers and Jetpack
PyTorch Version     NVIDIA Framework Container    JetPack Version
1.13.0a0+936e930    22.11                         5.0.2
1.13.0a0+d0d6b1f    22.10, 22.09                  5.0.2
1.13.0a0+08820cb    22.07                         5.0.2
1.13.0a0+340c412    22.06                         5.0.1
1.12.0a0+8a1a93a9   22.05                         5.0
1.12.0a0+bd13bc66   22.04                         5.0
1.12.0a0+2c916ef    22.03                         5.0
1.11.0a0+bfe5ad28   22.01                         4.6.1

Using PyTorch with the Jetson Platform

Storage

If you need more storage, we recommend connecting an external SSD via SATA on TX2 or Xavier devices, or via USB on Jetson Nano.
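
Framework container images and datasets can consume many gigabytes, so it can help to check available space before a large download. A minimal sketch using the Python standard library (the 10 GiB threshold is an arbitrary example, not a requirement):

```python
import shutil

def free_gib(path: str = "/") -> float:
    """Return the free space at `path` in GiB."""
    return shutil.disk_usage(path).free / 2**30

# Example: warn before pulling a multi-gigabyte container image.
if free_gib("/") < 10:
    print("Low storage: consider mounting an external SSD")
```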

Known Issues

  • If you receive a CUPTI_ERROR_INSUFFICIENT_PRIVILEGES error while profiling your code, run the script via sudo or ensure that your current user has the appropriate permissions to run CUPTI profiling.
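
    Since this error surfaces only once profiling starts, a script can check its privileges up front. A hypothetical guard, assuming a Unix environment where root privileges are sufficient for CUPTI:

    ```python
    import os

    def running_as_root() -> bool:
        """Illustrative guard: CUPTI profiling commonly requires elevated
        privileges, so fail fast instead of hitting
        CUPTI_ERROR_INSUFFICIENT_PRIVILEGES mid-run."""
        return os.geteuid() == 0

    if not running_as_root():
        print("Warning: re-run with sudo to enable CUPTI profiling")
    ```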

  • Building custom CUDA extensions may fail due to symbol leakage. This will be fixed in a future release.

  • A functional regression, manifesting as a memory violation, may be observed on Orin devices when calling torch.linalg.ldl_solve.