.. _tlt_cv_inference_pipeline_requirements_and_installation:

Requirements and Installation
=============================

This page describes the requirements and installation steps for the TLT CV Inference Pipeline.

Hardware Requirements
---------------------

The TLT CV Inference Pipeline has the following hardware requirements:

Minimum
^^^^^^^

* 4 GB system RAM
* 2.5 GB of GPU RAM
* 6-core CPU
* 1 NVIDIA GPU:

  * Discrete GPU: NVIDIA Volta, Turing, or Ampere architecture
  * `Jetson AGX Xavier`_, `Jetson Xavier NX`_, or `Jetson TX2`_

* 12 GB of HDD/SSD space
* 720p webcam

.. _Jetson AGX Xavier: https://www.nvidia.com/en-us/autonomous-machines/embedded-systems/jetson-agx-xavier/
.. _Jetson Xavier NX: https://www.nvidia.com/en-us/autonomous-machines/embedded-systems/jetson-xavier-nx/
.. _Jetson TX2: https://www.nvidia.com/en-us/autonomous-machines/embedded-systems/jetson-tx2/

Recommended
^^^^^^^^^^^

* 32 GB system RAM
* 32 GB of GPU RAM
* 8-core CPU
* 1 NVIDIA GPU:

  * Discrete GPU: NVIDIA Volta, Turing, or Ampere architecture
  * `Jetson AGX Xavier`_ or `Jetson Xavier NX`_

* 16 GB of SSD space
* Webcam:

  * Logitech C920 Pro HD
  * Logitech C922
  * Logitech C310

Software Requirements
---------------------

The TLT CV Inference Pipeline has the following software requirements:

* Ubuntu 18.04 LTS
* `NVIDIA GPU Cloud`_ account and API key
* `NVIDIA GPU Cloud CLI Tool`_ for AMD64 or ARM64 (must exist in ``${PATH}``)
* `docker-ce`_, configured for management as a non-root user per the `Docker Post-Installation Steps for Linux`_
* `nvidia docker`_
* NVIDIA GPU driver v455.xx or later
* JetPack 4.5 for Jetson devices

.. _NVIDIA GPU Cloud: https://ngc.nvidia.com/
.. _NVIDIA GPU Cloud CLI Tool: https://ngc.nvidia.com/setup/installers/cli
.. _Docker Post-Installation Steps for Linux: https://docs.docker.com/engine/install/linux-postinstall/
.. _docker-ce: https://docs.docker.com/install/linux/docker-ce/ubuntu/
.. _nvidia docker: https://github.com/NVIDIA/nvidia-docker

.. note:: JetPack 4.5 comes preinstalled with Docker and the NVIDIA Docker runtime, but you will still need to complete the `Docker Post-Installation Steps for Linux`_.

.. _tlt_cv_inference_pipeline_install_prereq:

Installation Prerequisites
--------------------------

Perform the following prerequisite steps before installing the TLT CV Inference Pipeline:

1. Install `Docker`_.
2. Install the `NVIDIA GPU driver`_.
3. Install `nvidia docker`_.
4. Get an `NGC`_ account and API key. For step-by-step instructions, see the `NGC Getting Started Guide`_.
5. Download the `NVIDIA GPU Cloud CLI Tool`_.
6. Execute :code:`docker login nvcr.io` from the command line and enter these login credentials:

   a. Username: "$oauthtoken"
   b. Password: "YOUR_NGC_API_KEY"

7. For Jetson devices, manually increase the `Jetson Power mode`_ and maximize performance further by using the `Jetson Clocks mode`_. The following commands perform this:

   .. code-block:: bash

      sudo nvpmodel -m 0
      sudo /usr/bin/jetson_clocks

.. _Docker: https://www.docker.com/
.. _NVIDIA GPU driver: https://www.nvidia.com/Download/index.aspx?lang=en-us
.. _NGC: https://ngc.nvidia.com/
.. _NGC Getting Started Guide: http://docs.nvidia.com/ngc/ngc-getting-started-guide/index.html
.. _Jetson Power mode: https://docs.nvidia.com/jetson/l4t/index.html#page/Tegra%2520Linux%2520Driver%2520Package%2520Development%2520Guide%2Fpower_management_jetson_xavier.html%23wwpID0E0NO0HA
.. _Jetson Clocks mode: https://docs.nvidia.com/jetson/l4t/index.html#page/Tegra%2520Linux%2520Driver%2520Package%2520Development%2520Guide%2Fpower_management_jetson_xavier.html%23wwpID0E0SB0HA

Installation
------------

The TLT CV Inference Pipeline containers and TLT models are available for download from NGC. You must have an NGC account and an API key associated with it. See the :ref:`tlt_cv_inference_pipeline_install_prereq` section for details on creating an NGC account and obtaining an API key.
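Before proceeding, it can help to confirm that the installed NVIDIA driver meets the v455.xx minimum listed in the software requirements. The helper below is only a sketch (its name and structure are not part of any NVIDIA tooling); the :code:`nvidia-smi` query shown in the trailing comment is the standard way to read the installed driver version.

.. code-block:: bash

   # Sketch of a driver-version check against the v455.xx minimum above.
   MIN_DRIVER_MAJOR=455

   driver_ok() {
       # $1 is a driver version string such as "460.32.03"
       major="${1%%.*}"           # keep only the major component
       [ "$major" -ge "$MIN_DRIVER_MAJOR" ]
   }

   # On a real system, query the installed version with:
   #   nvidia-smi --query-gpu=driver_version --format=csv,noheader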
Configure the NGC API Key
^^^^^^^^^^^^^^^^^^^^^^^^^

Using the NGC API key obtained in :ref:`tlt_cv_inference_pipeline_install_prereq`, configure the NGC CLI by executing this command and following the prompts:

.. code-block:: bash

   ngc config set

Download the TLT CV Inference Pipeline Quick Start
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

.. _TLT CV Inference Pipeline Quick Start: https://ngc.nvidia.com/resources/nvidia:tlt_cv_inference_pipeline_quick_start

Download the `TLT CV Inference Pipeline Quick Start`_ using the following command:

.. code-block:: bash

   ngc registry resource download-version "nvidia/tlt_cv_inference_pipeline_quick_start:v0.2-ga"

This downloads the Quick Start scripts, documentation for API usage, the EULA, and third-party licenses. To get started, proceed to the :ref:`TLT CV Inference Pipeline Quick Start Scripts`.
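If you script the download, for example to pin a different release later, the resource path can be assembled from the version tag. The :code:`qs_download_cmd` helper below is hypothetical and not part of the NGC CLI; it only prints the command shown above so the tag can be swapped in one place.

.. code-block:: bash

   # Hypothetical helper that builds the download command from a version
   # tag, so switching releases only means changing one argument.
   qs_download_cmd() {
       # $1 is the Quick Start version tag, e.g. v0.2-ga
       printf 'ngc registry resource download-version "nvidia/tlt_cv_inference_pipeline_quick_start:%s"\n' "$1"
   }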