Requirements and Installation

This page describes the requirements and installation steps for Transfer Learning Toolkit (TLT).

TLT has the following hardware requirements:


Minimum:

  • 4 GB system RAM

  • 4 GB of GPU RAM

  • Single-core CPU

  • 50 GB of HDD space

Recommended:

  • 32 GB system RAM

  • 32 GB of GPU RAM

  • 8-core CPU

  • 1 NVIDIA V100 GPU

  • 100 GB of SSD space


Currently, TLT is not supported on GA100 GPUs.

TLT has the following software requirements:


DeepStream 5.0, the NVIDIA SDK for IVA inference, is recommended.

Perform the following prerequisite steps before installing TLT:

  1. Install Docker.

  2. Install NVIDIA GPU driver v410.xx or above.

  3. Install nvidia-docker.

  4. Get an NGC account and API key:

    1. Go to NGC and click the Transfer Learning Toolkit container in the Catalog tab. This message is displayed: “Sign in to access the PULL feature of this repository”.

    2. Enter your Email address and click Next, or click Create an Account.

    3. Choose your organization when prompted for Organization/Team.

    4. Click Sign In.

    5. Select the Containers tab on the left navigation pane and click the Transfer Learning Toolkit tile.

  5. Download the Docker container.

  6. Execute docker login from the command line and enter these login credentials:

    1. Username: “$oauthtoken”

    2. Password: “YOUR_NGC_API_KEY”

  7. Execute docker pull nvcr.io/nvidia/tlt-streamanalytics:<version>.
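Steps 6 and 7 can be sketched together as below. This is illustrative only: the image path assumes the container is published as nvcr.io/nvidia/tlt-streamanalytics (substitute the exact path and tag shown on the NGC container page), and the API key placeholder must be replaced with your own. The commands are printed for review rather than executed:

```shell
# Placeholders; replace with your NGC API key and the exact image path/tag
# from the NGC container page.
NGC_API_KEY="YOUR_NGC_API_KEY"
IMAGE="nvcr.io/nvidia/tlt-streamanalytics:<version>"

# Print the login and pull commands for review before running them.
echo "docker login -u '\$oauthtoken' -p $NGC_API_KEY nvcr.io"
echo "docker pull $IMAGE"
```

Note that the username is the literal string $oauthtoken, not your NGC user name; the API key serves as the password.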

The Transfer Learning Toolkit (TLT) is available to download from the NGC. You must have an NGC account and an API key associated with your account. See the Installation Prerequisites section for details on creating an NGC account and obtaining an API key.

Running the Transfer Learning Toolkit

Use this procedure to run the Transfer Learning Toolkit.

Run the toolkit

Run the toolkit using the following command. Docker starts in the /workspace folder by default.


docker run --runtime=nvidia -it nvcr.io/nvidia/tlt-streamanalytics:<version> /bin/bash

Access local directories

To access local directories from inside Docker, you need to mount them in Docker. Use the option -v <source_dir>:<mount_dir> to mount local directories in Docker. For example, the command to run the toolkit, mounting the /home/<username>/tlt-experiments directory on your disk to /workspace/tlt-experiments in Docker, would be as follows:


docker run --runtime=nvidia -it -v /home/<username>/tlt-experiments:/workspace/tlt-experiments nvcr.io/nvidia/tlt-streamanalytics:<version> /bin/bash

It is useful to mount separate volumes for the dataset and the experiment results so that they persist outside of Docker; this way, the data is preserved after the container exits. Any data generated in, or referenced from, a directory inside the container will be lost unless it is copied out of the container or written to a volume mounted from outside.
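The separate-volume pattern described above can be sketched as follows. The host directory names, the read-only dataset mount, and the image tag are illustrative assumptions; the command is assembled and printed for review rather than run here:

```shell
# Mount the dataset read-only and the results directory read-write, so both
# survive after the container exits. Paths and image tag are illustrative.
DATA_DIR=/home/$USER/tlt-data
RESULTS_DIR=/home/$USER/tlt-results
IMAGE="nvcr.io/nvidia/tlt-streamanalytics:<version>"

CMD="docker run --runtime=nvidia -it \
  -v ${DATA_DIR}:/workspace/tlt-data:ro \
  -v ${RESULTS_DIR}:/workspace/tlt-results \
  ${IMAGE} /bin/bash"

echo "$CMD"   # print for review before executing
```

Mounting the dataset with :ro is optional, but it guarantees training jobs cannot accidentally modify the source data.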

Use the examples

Examples of DetectNet_v2, SSD, DSSD, RetinaNet, YOLOv3, and FasterRCNN object detection with a ResNet18 backbone are available as Jupyter notebooks. To run them, start the Jupyter notebook server included in the Docker image so that it is reachable from your browser:


docker run --runtime=nvidia -it -v /home/<username>/tlt-experiments:/workspace/tlt-experiments -p 8888:8888 nvcr.io/nvidia/tlt-streamanalytics:<version>

Go to the examples folder: cd examples/. Execute this command from inside Docker to start the Jupyter notebook server:


jupyter notebook --ip 0.0.0.0 --allow-root

Copy and paste the link produced from this command into your browser to access the notebook. The /workspace/examples folder will contain a demo notebook.


For all the detector notebooks, the tlt-train tool does not support training on images of multiple resolutions, or resizing images during training. All of the images must be resized offline to the final training size and the corresponding bounding boxes must be scaled accordingly.
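The offline-resize requirement above means label files must be rescaled together with the images. Here is a hedged sketch, assuming KITTI-style labels in which fields 5 through 8 hold xmin, ymin, xmax, ymax; the file names and scale factors are made up for illustration:

```shell
# Scale bounding-box coordinates by the same factors used to resize the image.
# Fields 5-8 of a KITTI-style label line are xmin, ymin, xmax, ymax.
SCALE_X=0.5   # new_width  / old_width
SCALE_Y=0.5   # new_height / old_height

# A one-line sample label for demonstration.
printf 'car 0.0 0 0.0 100.0 200.0 300.0 400.0 0 0 0 0 0 0 0\n' > label.txt

awk -v sx="$SCALE_X" -v sy="$SCALE_Y" \
    '{ $5 *= sx; $6 *= sy; $7 *= sx; $8 *= sy; print }' \
    label.txt > label_scaled.txt

cat label_scaled.txt
# -> car 0.0 0 0.0 50 100 150 200 0 0 0 0 0 0 0
```

The same factors must be applied to every label file in the dataset, and the images themselves resized with any offline tool before training.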

Downloading the Models

The Transfer Learning Toolkit Docker gives you access to a repository of pretrained models that can serve as a starting point when training deep neural networks. These models are hosted on the NGC. The TLT Docker interfaces with NGC via the NGC Catalog CLI. See the NGC Catalog CLI documentation for more information, and follow its instructions to configure the NGC CLI and download the models.

Configure the NGC API key

Using the NGC API key obtained in Installation Prerequisites, configure the NGC CLI included in the Docker image by executing this command and following the prompts:


ngc config set

Get a list of models

Use this command to get a list of models that are hosted in the NGC model registry:


ngc registry model list <model_glob_string>

Here is an example of using this command:


ngc registry model list nvidia/tlt_pretrained_*


All our classification models have names based on this template: nvidia/tlt_pretrained_classification:<backbone>.
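As a trivial sketch of the template, the registry spec for a given backbone is formed by substituting the tag; resnet18 matches the download example on this page, while other backbone tags would need to be confirmed against the registry listing:

```shell
# Compose a model spec from the naming template. "resnet18" is taken from the
# download example below; other backbone tags are assumptions to verify with
# "ngc registry model list".
BACKBONE=resnet18
MODEL="nvidia/tlt_pretrained_classification:${BACKBONE}"
echo "$MODEL"
# -> nvidia/tlt_pretrained_classification:resnet18
```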

Download a model

Use this command to download the model you have chosen from the NGC model registry:


ngc registry model download-version <ORG/model_name:version> --dest <path_to_download_dir>

For example, use this command to download the ResNet18 classification model to the $USER_EXPERIMENT_DIR directory:


ngc registry model download-version nvidia/tlt_pretrained_classification:resnet18 --dest $USER_EXPERIMENT_DIR/pretrained_resnet18


Downloaded 82.41 MB in 9s, Download speed: 9.14 MB/s
----------------------------------------------------
Transfer id: tlt_iva_classification_resnet18_v1 Download status: Completed.
Downloaded local path: /workspace/tlt-experiments/pretrained_resnet18/tlt_resnet18_classification_v1
Total files downloaded: 2
Total downloaded size: 82.41 MB
Started at: 2019-07-16 01:29:53.028400
Completed at: 2019-07-16 01:30:02.053016
Duration taken: 9s seconds

© Copyright 2020, NVIDIA. Last updated on Nov 18, 2020.