Requirements and Installation
This page describes the requirements and installation steps for the TLT CV Inference Pipeline.
The TLT CV Inference Pipeline has the following hardware requirements:
Minimum
4 GB system RAM
2.5 GB of GPU RAM
6-core CPU
1 NVIDIA GPU
Discrete GPU: NVIDIA Volta, Turing, or Ampere GPU architecture
12 GB of HDD/SSD space
720p Webcam
Recommended
32 GB system RAM
32 GB of GPU RAM
8-core CPU
1 NVIDIA GPU
Discrete GPU: NVIDIA Volta, Turing, or Ampere GPU architecture
16 GB of SSD space
Webcam
Logitech C920 Pro HD
Logitech C922
Logitech C310
The TLT CV Inference Pipeline has the following software requirements:
Ubuntu 18.04 LTS
NVIDIA GPU Cloud account and API key
NVIDIA GPU Cloud CLI Tool for AMD64 or ARM64 (must exist in ${PATH})
docker-ce, configured for management as a non-root user using the Docker Post-Installation Steps for Linux
NVIDIA GPU driver v455.xx or above
JetPack 4.5 for Jetson devices
JetPack 4.5 comes with Docker and the NVIDIA Docker runtime preinstalled, but you will still need to complete the Docker Post-Installation Steps for Linux.
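As a quick sanity check, the driver requirement above can be verified with `nvidia-smi` (a minimal sketch; the query flags are standard `nvidia-smi` options, and the fallback message is illustrative):

```shell
# Print the installed NVIDIA driver version so it can be compared
# against the v455.xx minimum; fall back gracefully when no GPU is present.
driver="$(nvidia-smi --query-gpu=driver_version --format=csv,noheader 2>/dev/null | head -n1)"
msg="${driver:+Driver version: $driver}"
msg="${msg:-nvidia-smi not found or no GPU detected}"
echo "$msg"
```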
Perform the following prerequisite steps before installing the TLT CV Inference Pipeline:
Install Docker.
Install the NVIDIA GPU driver.
Install NVIDIA Docker.
Get an NGC account and API key. For step-by-step instructions, see the NGC Getting Started Guide.
Download the NVIDIA GPU Cloud CLI Tool.
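Before continuing, it can help to confirm that each prerequisite tool is actually on your `PATH` (a hedged sketch; the tool list below is an assumption based on the steps above):

```shell
# Report whether each prerequisite command is installed and on PATH.
for tool in docker nvidia-smi ngc; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: found"
  else
    echo "$tool: MISSING"
  fi
done
```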
Execute
docker login nvcr.io
from the command line and enter these login credentials:
Username: “$oauthtoken”
Password: “YOUR_NGC_API_KEY”
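If you prefer to script the login rather than type the credentials interactively, Docker's standard `--password-stdin` flag can be used (a sketch; `NGC_API_KEY` is an environment variable name chosen here for illustration):

```shell
# Log in to nvcr.io without the API key appearing in shell history
# or in `ps` output. The username is literally "$oauthtoken".
printf '%s\n' "${NGC_API_KEY:?export NGC_API_KEY first}" |
  docker login nvcr.io --username '$oauthtoken' --password-stdin
```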
For Jetson devices, manually increase the Jetson Power mode and maximize performance further by using the Jetson Clocks mode. The following commands perform this:
sudo nvpmodel -m 0
sudo /usr/bin/jetson_clocks
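To confirm the change took effect, the active power mode and clock settings can be queried (a sketch; `nvpmodel -q` is the standard query flag, and `jetson_clocks --show` prints the current clock configuration on recent JetPack releases):

```shell
# Query the active power mode and current clock configuration (Jetson only).
sudo nvpmodel -q
sudo /usr/bin/jetson_clocks --show
```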
The TLT CV Inference Pipeline containers and TLT models are available for download from NGC. You must have an NGC account and an API key associated with your account. See the Installation Prerequisites section for details on creating an NGC account and obtaining an API key.
Configure the NGC API key
Using the NGC API key obtained in Installation Prerequisites, configure the NGC CLI by executing this command and following the prompts:
ngc config set
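To verify the configuration was saved, the NGC CLI can print its active settings (a sketch; `ngc config current` is assumed to be available in your CLI version):

```shell
# Show the currently configured NGC API key, org, and team.
ngc config current
```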
Download the TLT CV Inference Pipeline Quick Start
Download the TLT CV Inference Pipeline Quick Start with the following command:
ngc registry resource download-version "nvidia/tlt_cv_inference_pipeline_quick_start:v0.2-ga"
This downloads the Quick Start Scripts, API usage documentation, the EULA, and third-party licenses.
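The NGC CLI places the resource in a directory named after the resource and version in the current working directory; listing it shows the scripts and documents mentioned above (a sketch; the exact directory name on your system may differ):

```shell
# List the downloaded Quick Start contents. The directory name follows
# the NGC CLI convention <resource>_v<version>.
ls -R tlt_cv_inference_pipeline_quick_start_vv0.2-ga/
```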
To get started, proceed to the TLT CV Inference Pipeline Quick Start Scripts.