Deep Learning Frameworks Documentation - Last updated November 27, 2018

Deep Learning Frameworks


Preparing To Use NVIDIA Containers
This Preparing To Use NVIDIA Containers Getting Started Guide provides the first-step instructions for preparing to use NVIDIA containers on your DGX system. You must set up your DGX system before you can access the NVIDIA GPU Cloud (NGC) container registry to pull a container.
Container User Guide
This Containers for Deep Learning Frameworks User Guide provides a detailed overview about containers and step-by-step instructions for pulling and running a container, as well as customizing and extending containers.
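For orientation, the pull-and-run workflow that this guide walks through typically looks like the following sketch. The registry hostname is NGC's (`nvcr.io`); the image name and tag shown are illustrative only, and the commands assume Docker and the NVIDIA runtime are already installed as described in the Preparing To Use NVIDIA Containers guide.

```shell
# Log in to the NGC container registry (you will be prompted for your NGC API key)
docker login nvcr.io

# Pull an optimized framework container; the 18.11-py3 tag is illustrative,
# check the registry for the tag that matches your release
docker pull nvcr.io/nvidia/tensorflow:18.11-py3

# Run the container interactively with GPU access (nvidia-docker 2.x syntax)
docker run --runtime=nvidia -it --rm nvcr.io/nvidia/tensorflow:18.11-py3
```

The actual tags, mount options, and runtime flags for your system are covered in the guide itself.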

Best Practices


Introduction
This Best Practices section provides recommendations to help administrators and users work with Docker, extend frameworks, and administer and manage DGX products. Learn more about what these best practices sections cover.
DGX Best Practices
This DGX Best Practices Guide provides recommendations to help administrators and users administer and manage DGX products, such as DGX-2, DGX-1, and DGX Station.
Docker And Container Best Practices
This Docker And Container Best Practices Guide provides recommendations to help administrators and users work with Docker. This guide also highlights the best practices for using Docker with NVIDIA containers.
Frameworks And Scripts Best Practices
This Frameworks And Scripts Best Practices Guide provides recommendations to help administrators and users extend frameworks. This guide does not explain how to use the frameworks for your projects; rather, it briefly presents a few best practices for getting started with them.

Support Matrix


Frameworks Support Matrix
This support matrix is for NVIDIA optimized frameworks. The matrix provides a single view into the supported software and specific versions that come packaged with the frameworks based on the container image.

Optimized Frameworks Release Notes


MXNet Release Notes
This document describes the key features, software enhancements and improvements, known issues, and how to run this container for the 18.11 and earlier releases. The MXNet framework delivers high performance for convolutional neural networks, supports multi-GPU training, and provides automatic differentiation and optimized predefined layers. It’s a useful framework for those who need their model inference to “run anywhere”; for example, a data scientist can train a model on a DGX-1 with Volta by writing a model in Python, while a data engineer can deploy the trained model using a Scala API tied to the company’s existing infrastructure. The MXNet container is released monthly to provide you with the latest NVIDIA deep learning software libraries and GitHub code contributions that have been sent upstream, all of which are tested, tuned, and optimized.
NVCaffe Release Notes
This document describes the key features, software enhancements and improvements, known issues, and how to run this container for the 18.11 and earlier releases. The NVCaffe framework can be used for image recognition, specifically for creating, training, analyzing, and deploying deep neural networks. NVCaffe is based on the Caffe Deep Learning Framework by BVLC. The NVCaffe container is released monthly to provide you with the latest NVIDIA deep learning software libraries and GitHub code contributions that have been sent upstream, all of which are tested, tuned, and optimized.
PyTorch Release Notes
This document describes the key features, software enhancements and improvements, known issues, and how to run this container for the 18.11 and earlier releases. The PyTorch framework enables you to develop deep learning models with flexibility. With the PyTorch framework, you can make full use of Python packages such as SciPy and NumPy. The PyTorch framework is known to be convenient and flexible, with examples covering reinforcement learning, image classification, and machine translation as the more common use cases. The PyTorch container is released monthly to provide you with the latest NVIDIA deep learning software libraries and GitHub code contributions that have been sent upstream, all of which are tested, tuned, and optimized.
TensorFlow Release Notes
This document describes the key features, software enhancements and improvements, known issues, and how to run this container for the 18.11 and earlier releases. The TensorFlow framework can be used for education, research, and production use within your products; specifically, speech, voice, and sound recognition, information retrieval, and image recognition and classification. Furthermore, the TensorFlow framework can also be used for text-based applications, such as detection of fraud and threats; analyzing time series data to extract statistics; and video detection, such as motion and real-time threat detection in gaming, security, and similar domains. The TensorFlow container is released monthly to provide you with the latest NVIDIA deep learning software libraries and GitHub code contributions that have been sent upstream, all of which are tested, tuned, and optimized.

Optimized Frameworks User Guides


NVCaffe User Guide
The NVCaffe User Guide provides a detailed overview and describes how to use and customize the NVCaffe deep learning framework. This guide also documents the NVCaffe parameters you can use to apply the container's optimizations in your environment.
TensorFlow User Guide
The TensorFlow User Guide provides a detailed overview of and a look into using and customizing the TensorFlow deep learning framework. This guide also documents the NVIDIA TensorFlow parameters you can use to apply the container's optimizations in your environment.

Installing Frameworks For Jetson


TensorFlow For Jetson TX2
This guide provides instructions on how to install TensorFlow for Jetson TX2.
Release Notes For Jetson TX2
This is the first release of the documentation for installing TensorFlow for Jetson TX2. This document describes the key features, software enhancements and improvements, and known issues regarding the installation of TensorFlow for Jetson TX2.
TensorFlow For Jetson AGX Xavier
This guide provides instructions on how to install TensorFlow for Jetson AGX Xavier.
Release Notes For Jetson AGX Xavier
This is the first release of the documentation for installing TensorFlow for Jetson AGX Xavier. This document describes the key features, software enhancements and improvements, and known issues when installing TensorFlow for Jetson AGX Xavier.

Accelerating Inference In Frameworks With TensorRT


Accelerating Inference In TensorFlow With TensorRT User Guide
This guide provides instructions on how to accelerate inference in TensorFlow with TensorRT (TF-TRT).
Accelerating Inference In TensorFlow With TensorRT Release Notes
This is the first release of the documentation for Accelerating Inference In TensorFlow With TensorRT (TF-TRT). This document describes the key features, software enhancements and improvements, and known issues when integrating TensorRT.

Archived Optimized Frameworks Release Notes


Caffe2 Release Notes
This document describes the key features, software enhancements and improvements, known issues, and how to run this container for the 18.08 and earlier releases. The Caffe2 framework is used primarily for detection, segmentation, and translation tasks in production for Facebook applications. The Caffe2 framework focuses on cross-platform deployment and performance. The Caffe2 and PyTorch frameworks share many parallel features, which led to the two frameworks being merged into a single package. However, for now, the Caffe2 container is released monthly to provide you with the latest NVIDIA deep learning software libraries and GitHub code contributions that have been sent upstream, all of which are tested, tuned, and optimized.
Microsoft Cognitive Toolkit Release Notes
This document describes the key features, software enhancements and improvements, known issues, and how to run this container for the 18.08 and earlier releases. The Microsoft Cognitive Toolkit framework, previously known as CNTK, can be used for applications involving large datasets, such as object detection and recognition, speech, text, vision, and any combination of them. The Microsoft Cognitive Toolkit also supports inference use cases from C++, Python, C#/.NET, and Java, which enables you to deploy services on Linux, Windows, Universal Windows Platform (UWP), and Azure. The Microsoft Cognitive Toolkit container is released monthly to provide you with the latest NVIDIA deep learning software libraries and GitHub code contributions that have been sent upstream, all of which are tested, tuned, and optimized.
Theano Release Notes
This document describes the key features, software enhancements and improvements, known issues, and how to run this container for the 18.08 and earlier releases. The Theano framework enables you to define, analyze, and optimize mathematical equations using the Python library. Developers use Theano to manipulate and analyze expressions, including matrix-valued expressions. Theano focuses on recognizing numerically unstable expressions, building symbolic graphs automatically, and compiling parts of your numeric expression into CPU or GPU instructions. The Theano container is currently released monthly to provide you with the latest NVIDIA deep learning software libraries and GitHub code contributions that have been sent upstream, all of which are tested, tuned, and optimized. However, container updates will be discontinued once the next major CUDA version is released.
Torch Release Notes
This document describes the key features, software enhancements and improvements, known issues, and how to run this container for the 18.08 and earlier releases. The Torch framework is built on a scripting language called Lua, and you should be familiar with Lua before using Torch. Torch provides numerous algorithms for deep learning networks, used mostly by researchers. The Torch framework focuses on speeding up the time it takes to build a scientific algorithm. The Torch container is currently released monthly to provide you with the latest NVIDIA deep learning software libraries and GitHub code contributions that have been sent upstream, all of which are tested, tuned, and optimized. However, container updates will be discontinued once the next major CUDA version is released.