Abstract

This guide provides instructions on how to install TensorFlow for Jetson AGX Xavier.

1. Overview

TensorFlow

TensorFlow™ is an open-source software library for numerical computation using data flow graphs. Nodes in the graph represent mathematical operations, while the graph edges represent the multidimensional data arrays (tensors) that flow between them. This flexible architecture lets you deploy computation to one or more CPUs or GPUs in a desktop, server, or mobile device without rewriting code.

TensorFlow was originally developed by researchers and engineers working on the Google Brain team within Google's Machine Intelligence research organization for the purposes of conducting machine learning and deep neural networks (DNNs) research. The system is general enough to be applicable in a wide variety of other domains, as well.

Jetson AGX Xavier

NVIDIA Jetson AGX Xavier is the latest addition to the Jetson platform. It’s the world's first AI computer for autonomous machines, delivering the performance of a GPU workstation in an embedded module under 30W.

1.1. Benefits Of TensorFlow For Jetson AGX Xavier

Previously, installing TensorFlow for Jetson was complicated: many users found it too difficult to build the latest version of TensorFlow and get it working with CUDA and other NVIDIA GPU-related libraries. Now, installing TensorFlow for Jetson AGX Xavier is streamlined to just a few commands.

Installing TensorFlow for Jetson AGX Xavier provides you with access to the latest version of the framework on a lightweight, mobile platform without being restricted to TensorFlow Lite.

2. Prerequisites And Dependencies

Before you install TensorFlow for Jetson AGX Xavier, ensure you:
  1. Install JetPack 4.1.1 Developer Preview.
    Note: JetPack 4.1.1 Developer Preview comes with both Python 2.7 and 3.6.
  2. Install HDF5 as required by TensorFlow:
    sudo apt-get install libhdf5-serial-dev hdf5-tools
  3. Install pip.
    1. If you want to use Python 2.7, issue:
      sudo apt-get install python-pip
    2. If you want to use Python 3.6, issue:
      sudo apt-get install python3-pip
  4. Install the following packages (an optional check to confirm they installed correctly follows this list):
    sudo pip install -U pip
    sudo apt-get install zlib1g-dev zip libjpeg8-dev libhdf5-dev
    sudo pip install -U numpy grpcio absl-py py-cpuinfo psutil portpicker six mock requests gast h5py astor termcolor
    
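Optionally, before moving on, you can confirm that pip and the prerequisite Python packages were installed correctly. The following check is a minimal sketch rather than part of the official steps; use pip3 and python3 instead if you are targeting Python 3.6.
# Confirm that pip is on the PATH and report its version
pip --version
# Import two of the packages installed above and print their versions
python -c "import numpy, h5py; print(numpy.__version__, h5py.__version__)"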

3. Installing TensorFlow

Install TensorFlow using the pip command that corresponds to your chosen version of Python. This command will install the latest version of TensorFlow.
$: pip install --extra-index-url https://developer.download.nvidia.com/compute/redist/jp/v411 tensorflow-gpu
Note: Use pip3 if you are using Python 3.6.
If you want to install a specific version of TensorFlow, issue the following command:
pip install --extra-index-url https://developer.download.nvidia.com/compute/redist/jp/v411 tensorflow-gpu==$TF_VERSION+nv$NV_VERSION
Where:
TF_VERSION
The released version of TensorFlow, for example, 1.12.0-rc2.
NV_VERSION
The monthly NVIDIA container version of TensorFlow, for example, 18.11.
For example, to install TensorFlow 1.12.0-rc2 from the 18.11 NVIDIA container release, the command would look similar to the following:
pip install --extra-index-url https://developer.download.nvidia.com/compute/redist/jp/v411 tensorflow-gpu==1.12.0-rc2+nv18.11
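For instance, you can put the two version components in shell variables before issuing the command; the values below mirror the example above and are placeholders for whichever release you actually need.
# Example values only: the TensorFlow release and the NVIDIA container version
TF_VERSION=1.12.0-rc2
NV_VERSION=18.11
pip install --extra-index-url https://developer.download.nvidia.com/compute/redist/jp/v411 tensorflow-gpu==${TF_VERSION}+nv${NV_VERSION}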

4. Verifying The Installation

To verify that TensorFlow has been successfully installed on Jetson AGX Xavier, you’ll need to launch a Python prompt and import TensorFlow.
  1. From the terminal, run whichever Python version you've selected. For example:
    $: python<x>
    Where <x> is your version of Python. Python versions 2.7 and 3.6 are supported.
  2. Import TensorFlow:
    >>> import tensorflow

    If TensorFlow was installed correctly, this command should execute without error.
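
As an additional, optional check from the terminal (not part of the official steps), you can print the TensorFlow version and confirm that the GPU is visible. tf.test.is_gpu_available() is part of the TensorFlow 1.x API; treat the line below as a sketch, and substitute python3 if you are using Python 3.6.
# Print the installed TensorFlow version and whether a GPU is available
python -c "import tensorflow as tf; print(tf.__version__); print(tf.test.is_gpu_available())"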

5. Best Practices

Performance mode

It is recommended that you choose the right performance mode to get the best possible performance within your energy-usage limits. The nvpmodel command-line tool can be used to change the performance mode. To check the current performance mode, issue:
sudo nvpmodel -q --verbose
To change the mode to MAX-N, issue:
sudo nvpmodel -m 0
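For example, a typical session queries the current mode, switches to MAX-N, and then queries again to confirm the change; only the two commands shown above are used.
# Check which performance mode is currently active
sudo nvpmodel -q --verbose
# Switch to MAX-N (mode 0)
sudo nvpmodel -m 0
# Query again to confirm that the new mode took effect
sudo nvpmodel -q --verbose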

Swap space

Certain applications can run out of memory, because the 16 GB of memory is shared between the CPU and GPU. This can be resolved by creating a swap partition on external storage; typically, 4 GB of swap space is enough.
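As a hedged example, the commands below create and enable a 4 GB swap file on external storage; a swap file serves the same purpose as the swap partition mentioned above, and these are standard Linux commands rather than Jetson-specific ones. The path /mnt/external is a placeholder for wherever your external storage is mounted.
# Create a 4 GB file on the external storage (the path is a placeholder)
sudo fallocate -l 4G /mnt/external/swapfile
# Restrict permissions as required for swap files
sudo chmod 600 /mnt/external/swapfile
# Format the file as swap and enable it
sudo mkswap /mnt/external/swapfile
sudo swapon /mnt/external/swapfile
# Confirm that the swap space is active
free -h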

6. Uninstalling

TensorFlow can easily be uninstalled using the pip uninstall command, where the version of pip corresponds to your version of Python.
$: pip uninstall -y tensorflow-gpu
Note: Use pip3 if you are using Python 3.6.

7. Troubleshooting

You can find the Jetson AGX Xavier board in the NVIDIA Embedded Computing Jetson and Embedded Systems forum. The forum lets you find answers, make connections, and get involved in discussions with customers, developers, and Jetson engineers, and it is the place to raise Jetson AGX Xavier-specific issues.

Notices

Notice

THE INFORMATION IN THIS GUIDE AND ALL OTHER INFORMATION CONTAINED IN NVIDIA DOCUMENTATION REFERENCED IN THIS GUIDE IS PROVIDED “AS IS.” NVIDIA MAKES NO WARRANTIES, EXPRESSED, IMPLIED, STATUTORY, OR OTHERWISE WITH RESPECT TO THE INFORMATION FOR THE PRODUCT, AND EXPRESSLY DISCLAIMS ALL IMPLIED WARRANTIES OF NONINFRINGEMENT, MERCHANTABILITY, AND FITNESS FOR A PARTICULAR PURPOSE. Notwithstanding any damages that customer might incur for any reason whatsoever, NVIDIA’s aggregate and cumulative liability towards customer for the product described in this guide shall be limited in accordance with the NVIDIA terms and conditions of sale for the product.

THE NVIDIA PRODUCT DESCRIBED IN THIS GUIDE IS NOT FAULT TOLERANT AND IS NOT DESIGNED, MANUFACTURED OR INTENDED FOR USE IN CONNECTION WITH THE DESIGN, CONSTRUCTION, MAINTENANCE, AND/OR OPERATION OF ANY SYSTEM WHERE THE USE OR A FAILURE OF SUCH SYSTEM COULD RESULT IN A SITUATION THAT THREATENS THE SAFETY OF HUMAN LIFE OR SEVERE PHYSICAL HARM OR PROPERTY DAMAGE (INCLUDING, FOR EXAMPLE, USE IN CONNECTION WITH ANY NUCLEAR, AVIONICS, LIFE SUPPORT OR OTHER LIFE CRITICAL APPLICATION). NVIDIA EXPRESSLY DISCLAIMS ANY EXPRESS OR IMPLIED WARRANTY OF FITNESS FOR SUCH HIGH RISK USES. NVIDIA SHALL NOT BE LIABLE TO CUSTOMER OR ANY THIRD PARTY, IN WHOLE OR IN PART, FOR ANY CLAIMS OR DAMAGES ARISING FROM SUCH HIGH RISK USES.

NVIDIA makes no representation or warranty that the product described in this guide will be suitable for any specified use without further testing or modification. Testing of all parameters of each product is not necessarily performed by NVIDIA. It is customer’s sole responsibility to ensure the product is suitable and fit for the application planned by customer and to do the necessary testing for the application in order to avoid a default of the application or the product. Weaknesses in customer’s product designs may affect the quality and reliability of the NVIDIA product and may result in additional or different conditions and/or requirements beyond those contained in this guide. NVIDIA does not accept any liability related to any default, damage, costs or problem which may be based on or attributable to: (i) the use of the NVIDIA product in any manner that is contrary to this guide, or (ii) customer product designs.

Other than the right for customer to use the information in this guide with the product, no other license, either expressed or implied, is hereby granted by NVIDIA under this guide. Reproduction of information in this guide is permissible only if reproduction is approved by NVIDIA in writing, is reproduced without alteration, and is accompanied by all associated conditions, limitations, and notices.

Trademarks

NVIDIA, the NVIDIA logo, and cuBLAS, CUDA, cuDNN, cuFFT, cuSPARSE, DIGITS, DGX, DGX-1, DGX Station, GRID, Jetson, Kepler, NVIDIA GPU Cloud, Maxwell, NCCL, NVLink, Pascal, Tegra, TensorRT, Tesla and Volta are trademarks and/or registered trademarks of NVIDIA Corporation in the United States and other countries. Other company and product names may be trademarks of the respective companies with which they are associated.