Running TAO Toolkit on an Azure VM

Microsoft Azure offers several GPU-optimized virtual machines (VMs) with access to NVIDIA A100, V100, and T4 GPUs.

Setting up an Azure VM

  1. Azure provides several VMs powered by NVIDIA GPUs, including the ND A100 v4, NCv3, and NCasT4_v3 series. We recommend using the NVIDIA-provided GPU-optimized image as the base image for the VM. This base image includes all the lower-level dependencies, which reduces the friction of installing drivers and other prerequisites.

    Pull the GPU-optimized image from the Azure Marketplace by clicking the Get It Now button.

    ../../_images/gpu_optimized_image.png

    Under Software plan, select the latest version (v21.04.1 at the time of writing), which includes the latest NVIDIA drivers and CUDA Toolkit. Once you select the version, you will be directed to the Azure portal, where you will create your VM.

    ../../_images/azure_image_version_selection_window.png
  2. Configure your VM.

    1. In the Azure portal, click Create to start configuring the VM.

      ../../_images/azure_portal.png

      This will bring up the following page, where you can select your subscription, resource group, region, and hardware configuration.

    2. Provide a name for your VM, then click the Review + create button at the bottom of the page to do a final review.

      Note

      The default disk size is 32 GB. We recommend using a disk larger than 128 GB for this experiment.

      ../../_images/azure_create_vm.png
    3. Make a final review of the offering that you are creating. Then click the Create button to spin up your VM in Azure.

      Note

      Once you create the VM, you will start incurring costs, so review the pricing details before proceeding.

      ../../_images/azure_vm_review.png
  3. Log in to your VM: Once the VM has been created, SSH into it using your username and the VM's DNS name or IP address.

    ssh <username>@<ip_address>
    
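    If you created the VM with SSH key authentication (the Azure default), you may need to point ssh at the private key explicitly. This is a minimal sketch; the key path below is a placeholder for the key you specified or downloaded when creating the VM.

    ssh -i ~/.ssh/<private_key_file> <username>@<ip_address>
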

Installing the Pre-Requisites for TAO Toolkit in the VM

  1. Configure user permissions in the VM:

    # Switch to the root user
    sudo su - root
    # Add the azureuser account to the docker group so that it can run docker commands without sudo
    usermod -a -G docker azureuser
    
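    The group change takes effect only in a new login session. As a quick sanity check (assuming the Docker daemon from the NVIDIA base image is already running), exit the root shell, log out and back in as azureuser, and confirm that docker commands work without sudo:

    docker ps
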
  2. Install the pre-requisite apt packages:

    apt-get -y install python3-pip unzip
    
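    If the package installation fails because the package lists on the image are stale, refreshing them first usually resolves it (an optional step, not part of the original instructions):

    apt-get update
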
  3. Install the virtualenv wrapper:

    pip3 install virtualenvwrapper
    
  4. Configure the virtualenv wrapper:

    export VIRTUALENVWRAPPER_PYTHON=/usr/bin/python3
    source /usr/local/bin/virtualenvwrapper.sh
    
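    These exports apply only to the current shell session. As an optional convenience (an assumption on our part, not required by the original setup), you can append them to your shell's ~/.bashrc so that virtualenvwrapper is available in every new login session:

    echo 'export VIRTUALENVWRAPPER_PYTHON=/usr/bin/python3' >> ~/.bashrc
    echo 'source /usr/local/bin/virtualenvwrapper.sh' >> ~/.bashrc
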
  5. Create a virtualenv for the launcher using the following command:

    mkvirtualenv -p /usr/bin/python3 launcher
    

    Note

    You only need to create this virtualenv once per instance. When you restart the instance, simply re-run the commands from step 4 and activate the same virtualenv using the command below:

    workon launcher
    
  6. Install jupyterlab in the virtualenv using the command below:

    pip3 install jupyterlab
    
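    You can confirm that JupyterLab was installed into the active virtualenv with the command below; the reported version will vary.

    jupyter lab --version
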
  7. Log in to the NGC docker registry named nvcr.io:

    docker login nvcr.io
    

    The username is $oauthtoken and the password is your NGC API key. You can generate this API key from the NGC website.

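    If you prefer a non-interactive login (for example, in a setup script), you can pipe the key to docker login instead. This is an optional variant; the NGC_API_KEY environment variable below is an assumption and simply holds the API key generated on the NGC website.

    # Assumes the API key has been exported as NGC_API_KEY
    echo "$NGC_API_KEY" | docker login nvcr.io --username '$oauthtoken' --password-stdin
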
  8. Install the TAO Toolkit launcher package:

    # nvidia-pyindex configures pip to pull packages from NVIDIA's Python package index
    pip3 install nvidia-pyindex
    # nvidia-tao provides the TAO Toolkit launcher CLI
    pip3 install nvidia-tao
    
  9. Verify the launcher installation using the tao info --verbose command. The output should be similar to the following:

    Configuration of the TAO Toolkit Instance
    
    dockers:
            nvidia/tao/tao-toolkit-tf:
                    docker_registry: nvcr.io
                    docker_tag: v3.21.08-py3
                    tasks:
                            1. augment
                            2. bpnet
                            3. classification
                            4. detectnet_v2
                            5. dssd
                            6. emotionnet
                            7. faster_rcnn
                            8. fpenet
                            9. gazenet
                            10. gesturenet
                            11. heartratenet
                            12. lprnet
                            13. mask_rcnn
                            14. multitask_classification
                            15. retinanet
                            16. ssd
                            17. unet
                            18. yolo_v3
                            19. yolo_v4
                            20. yolo_v4_tiny
                            21. converter
            nvidia/tao/tao-toolkit-pyt:
                    docker_registry: nvcr.io
                    docker_tag: v3.21.08-py3
                    tasks:
                            1. speech_to_text
                            2. speech_to_text_citrinet
                            3. text_classification
                            4. question_answering
                            5. token_classification
                            6. intent_slot_classification
                            7. punctuation_and_capitalization
            nvidia/tao/tao-toolkit-lm:
                    docker_registry: nvcr.io
                    docker_tag: v3.21.08-py3
                    tasks:
                            1. n_gram
    format_version: 1.0
    toolkit_version: 3.21.08
    published_date: 08/17/2021
    

Downloading and Running the Test Samples

Now that you have created a virtualenv and installed all the dependencies, you are ready to download and run the TAO samples in a Jupyter notebook. The instructions below assume that you are running the TAO Computer Vision samples. For Conversational AI samples, refer to the sample notebooks in this section.

  1. Download and unzip the notebooks from NGC using the commands below:

    wget --content-disposition https://api.ngc.nvidia.com/v2/resources/nvidia/tao/cv_samples/versions/v1.2.0/zip -O cv_samples_v1.2.0.zip
    unzip -u cv_samples_v1.2.0.zip -d ./cv_samples_v1.2.0 && cd ./cv_samples_v1.2.0
    
  2. Launch the jupyter notebook using the command below:

    jupyter notebook --ip 0.0.0.0 --port 8888 --allow-root --NotebookApp.token=<notebook_token>
    

    This will start the Jupyter notebook server in the VM. To access this server, navigate to http://<dns_name>:8888/ and, when prompted, enter the <notebook_token> used to start the notebook server. The dns_name here is the public IP address or DNS name of the VM, which you can find on the VM's Overview page in the Azure portal.
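
    Note that port 8888 must be reachable from your browser before you can open the notebook server. The commands below are a sketch of two common options and are not part of the original instructions; the resource group, VM name, username, and IP address are placeholders.

    # Option 1: open port 8888 in the VM's network security group using the Azure CLI (run from your local machine)
    az vm open-port --resource-group <resource_group> --name <vm_name> --port 8888

    # Option 2: skip opening the port and instead forward it over SSH, then browse to http://localhost:8888/
    ssh -L 8888:localhost:8888 <username>@<ip_address>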