NGC on Azure Virtual Machines

This NGC on Azure Virtual Machines Guide explains how to set up an NVIDIA GPU Cloud Machine Image on the Microsoft Azure platform and includes release notes for each version of the NVIDIA virtual machine image.

1. Using NGC on Azure Virtual Machines

NVIDIA makes available on Microsoft Azure the following virtual machine images (VMIs). These are GPU-optimized VMIs for Azure VM instances with NVIDIA A100, V100, or P100 GPUs.

  • NVIDIA GPU-Optimized Image for Deep Learning, Machine Learning & HPC
    • The base GPU-optimized image, which includes Ubuntu Server, the NVIDIA driver, Docker CE, and the NVIDIA Container Runtime for Docker
  • NVIDIA GPU-Optimized Image for TensorFlow
    • The base image with NVIDIA’s GPU-accelerated TensorFlow container pre-installed
  • NVIDIA GPU-Optimized Image for PyTorch
    • The base image with NVIDIA’s GPU-accelerated PyTorch container pre-installed
  • NVIDIA HPC SDK Image
    • The base image with NVIDIA’s HPC SDK pre-installed

For those familiar with the Azure platform, the process of launching the instance is as simple as logging into Azure, selecting the NVIDIA GPU-optimized Image of choice, configuring settings as needed, then launching the VM. After launching the VM, you can SSH into it and start building a host of AI applications in deep learning, machine learning and data science leveraging the plethora of GPU-accelerated containers, pre-trained models and resources from NGC.

This document provides step-by-step instructions for accomplishing this, including how to use the Azure CLI.

1.1. Prerequisites

These instructions assume the following:

  • You have an Azure account - https://portal.azure.com, with either permissions to create a Resource Group or with a Resource Group already available to you.

  • You have browsed the NGC website and identified an available NGC container and tag to run on the Virtual Machine Instance (VMI).
  • If you plan to use the Azure CLI or Terraform, then the Azure CLI 2.0 must be installed.
  • Windows Users: The CLI code snippets are for bash on Linux or Mac OS X. If you are using Windows and want to use the snippets as-is, you can use the Windows Subsystem for Linux and use the bash shell (you will be in Ubuntu Linux).

1.2. Before You Start

Be sure you are familiar with the information in this chapter before starting to use the NVIDIA GPU Cloud Machine Image on Microsoft Azure.

1.2.1. Setting Up SSH Keys

If you do not already have an SSH key set up specifically for Azure, you will need to create one and have it on the machine you will use to SSH to the VM. In the examples, the key is named "azure-key".

On Linux or OS X, generate a new key with the following command:

ssh-keygen -t rsa -b 2048 -f ~/.ssh/azure-key

On Windows, the location will depend on the SSH client you use, so modify the path above in the snippets or in your SSH client configuration.
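
To display the public key, which you will paste into the Azure portal later, you can print the .pub file generated above:

cat ~/.ssh/azure-key.pub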

Alternatively, you can authenticate using a username and password that you set up while creating the VM. However, the SSH key method provides better security.

For more information, see https://docs.microsoft.com/en-us/azure/virtual-machines/linux/mac-create-ssh-keys.

1.2.2. Setting Up a Security Group

When creating your NVIDIA GPU Cloud VM, Azure sets up a network security group for the VM and you should choose to allow external access to inbound ports 22 (for SSH) and 443 (for HTTPS). You can add inbound rules to the network security group later for other ports as needed, such as port 8888 for DIGITS.

You can also set up a separate network security group ahead of time so that it is available any time you create a new NVIDIA GPU Cloud VM. Refer to the Microsoft instructions to Create, Change, or Delete a Network Security Group.

Add the following inbound rules to your network security group:
  • SSH
    • Destination port ranges: 22
    • Protocol: TCP
    • Name: SSH
  • HTTPS
    • Destination port ranges: 443
    • Protocol: TCP
    • Name: HTTPS
  • Others as needed

    Example: DIGITS

    • Destination port ranges: 8888
    • Protocol: TCP
    • Name: DIGITS
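
If you prefer to create the network security group from the CLI, a minimal sketch follows; the resource group and security group names are placeholders to replace with your own:

az network nsg create -g ACME_RG -n my-nvgpu-nsg
az network nsg rule create -g ACME_RG --nsg-name my-nvgpu-nsg -n SSH \
  --priority 1000 --direction Inbound --access Allow --protocol Tcp \
  --destination-port-ranges 22
az network nsg rule create -g ACME_RG --nsg-name my-nvgpu-nsg -n HTTPS \
  --priority 1010 --direction Inbound --access Allow --protocol Tcp \
  --destination-port-ranges 443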

1.3. Creating an NGC Certified Virtual Machine using the Azure Console

1.3.1. Log in and Launch the VM

  1. Log into the Azure portal (https://portal.azure.com).
  2. Select Create a Resource from the Azure Services menu.

  3. On the New pane, search for "nvidia", and then select the NVIDIA GPU-Optimized image of your choice from the list.

  4. Select the latest release version (or another version if required) from the software plan menu and then click Create.

  5. Complete the settings under the Basics tab as follows:
    • Subscription and Resource Group: Choose the options relevant to your subscription
    • Virtual Machine Name: Name of choice
    • Region: Select a region with instance types featuring the latest NVIDIA GPUs (NCv3 series). In this example we use the (US) East US region. A list of available instance types by region can be found in the Azure documentation.
    • Authentication Choice: SSH, with username of choice
    • SSH public key: Paste in the SSH public key that you generated previously
  6. Click Next to select a Premium SSD and add data disks.
  7. In the Networking section, select the Network Security Group you created earlier under the Configure network security group option.
  8. Make other Settings selections as needed, then click OK.

    After the validation passes, the portal presents the details of your new image which you can download as a template to automate deployment later.

  9. Click Deploy to deploy the image. The deployment starts, as indicated by the traveling bar underneath the Alert icon. It may take a few minutes to complete.

1.3.2. Connect to Your VM Instance

  1. Open the VM instance that you created.
    1. Navigate to the Azure portal home page and click on Virtual Machines under the Azure services menu.
    2. Select the VM you created and want to connect to.
  2. Click Connect from the action bar at the top and then select SSH.

    If you cannot log in via SSH, refer to the Troubleshooting SSH connections to an Azure Linux VM that fails, errors out, or is refused documentation for further troubleshooting.
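
    The portal displays the exact SSH command for your VM. It generally has the following form, where the username and public IP address are placeholders for your own values:

    ssh -i ~/.ssh/azure-key <username>@<public-ip-address>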

1.3.3. Start/Stop Your VM Instance

  1. Open the VM instance you created.
    1. Navigate to the Azure portal home page and click on Virtual Machines under the Azure services menu.
    2. Select the VM you created and want to manage.
  2. Click Start or Stop from the action bar at the top.

1.3.4. Delete VM and Associated Resources

When you created your VM, other resources for that instance were automatically created for you, such as a network interface, public IP address, and boot disk. If you delete your VM, you will also need to delete these resources.
  1. Open the VM instance you created.
    1. Navigate to the Azure portal home page and click on Virtual Machines under the Azure services menu.
    2. Select the VM you created and want to delete.
  2. Click Delete from the action bar at the top and confirm your choice by typing ‘yes’ in the confirmation pane that appears.

1.4. Launching an NVIDIA GPU Cloud VM Using Azure CLI

If you plan to use the Azure CLI, then the CLI must be installed.

Some of the CLI snippets in these instructions make use of jq, which should be installed on the machine from which you'll run the CLI. You may paste these snippets into your own bash scripts or type them at the command line.

1.4.1. Set Up Environment Variables

Use the following table as a guide for determining the values you will need for creating your GPU Cloud VM. The variable names are arbitrary and used in the instructions that follow.

  • AZ_VM_NAME: Name for your GPU Cloud VM. Example: my-nvgpu-vmi
  • AZ_RESOURCE_GROUP: Your resource group. Example: ACME_RG
  • AZ_IMAGE: The NVIDIA GPU-Optimized Image. See the Release Notes for NVIDIA Virtual Machine Images on Azure for the latest release. Example: NVIDIA-GPU-Cloud-Image
  • AZ_LOCATION: A location that contains GPUs. Refer to https://azure.microsoft.com/en-us/global-infrastructure/services/ to see available locations for NCv2 and NCv3 series SKUs. Example: westus2
  • AZ_SIZE: The SKU, specified by the number of vCPUs, RAM, and GPUs. Refer to https://docs.microsoft.com/en-us/azure/virtual-machines/linux/sizes-gpu for the list of P40, P100, and V100 SKUs to choose from. Example: NC6s_v2
  • AZ_SSH_KEY: The path to your public SSH key, <path>/<public-azure-key.pub>. Example: ~/.ssh/azure-key.pub
  • AZ_USER: Your username. Example: jsmith
  • AZ_NSG: Your network security group. Example: my-nvgpu-nsg
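
For example, you can define these variables in your shell before running the commands that follow (all values are illustrative; substitute your own):

AZ_VM_NAME=my-nvgpu-vmi
AZ_RESOURCE_GROUP=ACME_RG
AZ_IMAGE=NVIDIA-GPU-Cloud-Image
AZ_LOCATION=westus2
AZ_SIZE=NC6s_v2
AZ_SSH_KEY=~/.ssh/azure-key.pub
AZ_USER=jsmith
AZ_NSG=my-nvgpu-nsg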

1.4.2. Launch Your VM Instance

Be sure you have installed Azure CLI and that you are ready with the VM setup information listed in the section Set Up Environment Variables. You can then either manually replace the variable names in the commands in this section with the actual values or define the variables ahead of time.

  1. Log in to the Azure CLI.
    az login
  2. Enter the following:
    az vm create \
     --name ${AZ_VM_NAME} \
     --resource-group ${AZ_RESOURCE_GROUP} \
     --image ${AZ_IMAGE} \
     --location ${AZ_LOCATION} \
     --size ${AZ_SIZE} \
     --ssh-key-value ${AZ_SSH_KEY} \
     --admin-username ${AZ_USER} \
     --nsg ${AZ_NSG}
    
    If successful, you should see output consisting of a JSON description of your VM, and the GPU Cloud VM is deployed. Note the public IP address for use when establishing an SSH connection to the VM. You can also capture the JSON output and set up an AZ_PUBLIC_IP variable as follows:
    AZ_JSON=$(az vm create \
     --name ${AZ_VM_NAME} \
     --resource-group ${AZ_RESOURCE_GROUP} \
     --image ${AZ_IMAGE} \
     --location ${AZ_LOCATION} \
     --size ${AZ_SIZE} \
     --ssh-key-value ${AZ_SSH_KEY} \
     --admin-username ${AZ_USER} \
     --nsg ${AZ_NSG})
    AZ_PUBLIC_IP=$(echo $AZ_JSON | jq .publicIpAddress | sed 's/\"//g') && \
     echo $AZ_JSON && echo AZ_PUBLIC_IP=$AZ_PUBLIC_IP
Azure sets up a non-persistent scratch disk for each VM. See the section Using Premium Storage SSDs for Datasets for instructions on setting up alternate storage for your datasets.

1.4.3. Connect to Your VM Instance

Using a CLI on Mac or Linux (Windows users: use OpenSSH on Windows PowerShell or use the Windows Subsystem for Linux), run ssh to connect to your GPU VM instance.

ssh -i $AZ_SSH_KEY $AZ_USER@$AZ_PUBLIC_IP
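
Once logged in, you can verify that the GPUs are visible and launch an NGC container. A short sketch, assuming an image release that includes the NVIDIA Container Toolkit and using an illustrative container tag (browse NGC for current tags):

nvidia-smi
docker run --gpus all -it --rm nvcr.io/nvidia/tensorflow:20.02-tf1-py3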

1.4.4. Start/Stop Your VM Instance

VMs can be stopped and started again without losing any of their storage and other resources.

To stop and deallocate a running VM:

az vm deallocate --resource-group $AZ_RESOURCE_GROUP --name $AZ_VM_NAME

To start a stopped VM:

az vm start --resource-group $AZ_RESOURCE_GROUP --name $AZ_VM_NAME

When starting a stopped VM, you will need to update the public IP variable, as it will change with the newly started VM.

AZ_PUBLIC_IP=$(az network public-ip show \
  --resource-group $AZ_RESOURCE_GROUP \
  --name ${AZ_VM_NAME}PublicIP | jq .ipAddress | sed 's/\"//g') && \
  echo AZ_PUBLIC_IP=$AZ_PUBLIC_IP

1.4.5. Delete VM and Associated Resources

When you created your VM, other resources for that instance were automatically created for you, such as a network interface, public IP address, and boot disk. If you delete your instance, you will also need to delete these resources.

Perform the deletions in the following order.

  1. Delete your VM.
    az vm delete -g $AZ_RESOURCE_GROUP -n $AZ_VM_NAME
  2. Delete the VM OS disk.
    1. List the disks in your Resource Group.
      az disk list -g $AZ_RESOURCE_GROUP
      The associated OS disk will have the name of your VM as the base name.
    2. Delete the OS disk.
      az disk delete -g $AZ_RESOURCE_GROUP -n MyDisk 
    See https://docs.microsoft.com/en-us/cli/azure/disk?view=azure-cli-latest#az-disk-delete for more information.
  3. Delete the VM network interface.
    1. List the network interface resources in your Resource Group.
      az network nic list -g $AZ_RESOURCE_GROUP
      The associated network interface will have the name of your VM as the base name.
    2. Delete the network interface resource.
      az network nic delete -g $AZ_RESOURCE_GROUP -n MyNic
    See https://docs.microsoft.com/en-us/cli/azure/network/nic?view=azure-cli-latest#az-network-nic-delete for more information.
  4. Delete the VM public IP address.
    1. List the public IPs in your Resource Group.
      az network public-ip list -g $AZ_RESOURCE_GROUP
      The associated public IP will have the name of your VM as the base name.
    2. Delete the public IP.
      az network public-ip delete -g $AZ_RESOURCE_GROUP -n MyIp 
    See https://docs.microsoft.com/en-us/cli/azure/network/public-ip?view=azure-cli-latest#az-network-public-ip-delete for more information.

1.5. Using Premium Storage SSDs for Datasets

You can create a Premium Storage SSD from the Azure dashboard. Premium Storage SSDs are ideal for persistent storage of a large number of datasets and offer better performance than standard storage.

1.5.1. Create a Data Disk Using the Azure Console

  1. Open the VM instance that you created.
    1. Navigate to the Azure portal home page and click on Virtual Machines under the Azure services menu.
    2. Select the VM that you created and want to manage.
  2. Select Disks under the Settings category in the control panel on the left.

  3. Click Add Disk, then click the Name drop-down menu and select Create Disk.
  4. On the Create Managed Disk pane:
    • Enter a disk name
    • Select a resource group
    • Select Premium SSD for Account type
    • Enter a disk size
  5. Click Create.
  6. When the validation is completed, click Save.

1.5.2. Creating a Data Disk Using the Azure CLI

To create a new data disk and attach it to your VM, include the following option in the az vm create command.

 --data-disk-sizes-gb <data-disk-size> 

To attach an existing data disk to your VM when creating it, include the following option in the az vm create command.

 --attach-data-disks <data-disk-name> 
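
For example, a minimal sketch that creates the VM with a new 1 TB Premium data disk attached (the size is illustrative):

az vm create \
 --name ${AZ_VM_NAME} \
 --resource-group ${AZ_RESOURCE_GROUP} \
 --image ${AZ_IMAGE} \
 --location ${AZ_LOCATION} \
 --size ${AZ_SIZE} \
 --ssh-key-value ${AZ_SSH_KEY} \
 --admin-username ${AZ_USER} \
 --nsg ${AZ_NSG} \
 --data-disk-sizes-gb 1024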

1.5.2.1. Mounting a Data Disk

  1. Once the data disk is created, establish an SSH connection to your VM.
  2. Create a filesystem on the data disk.

    You can view the volumes by running the lsblk command.

    ~# lsblk
    NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
    sdb      8:16   0  1.5T  0 disk
    └─sdb1   8:17   0  1.4T  0 part /mnt
    sr0     11:0    1  628K  0 rom
    sdc      8:32   0    2T  0 disk
    └─sdc1   8:33   0    2T  0 part
    sda      8:0    0  240G  0 disk
    └─sda1   8:1    0  240G  0 part /

    Create the filesystem:

    ~# mkfs.ext4 /dev/sdc1
  3. Mount the volume to a mount directory, creating the directory first if it does not exist.
    ~# mkdir -p /data
    ~# mount /dev/sdc1 /data

    To mount the volume automatically every time the VM is stopped and restarted, add an entry to /etc/fstab.

    When adding an entry to /etc/fstab, use a UUID-based device path, because device names such as /dev/sdc1 are not guaranteed to persist across reboots.

    For example:

    UUID=33333333-3b3b-3c3c-3d3d-3e3e3e3e3e3e /data ext4 defaults,nofail 1 2 
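
    You can look up the UUID of the new filesystem with the blkid command; the output below is illustrative and matches the placeholder UUID above:

    ~# blkid /dev/sdc1
    /dev/sdc1: UUID="33333333-3b3b-3c3c-3d3d-3e3e3e3e3e3e" TYPE="ext4"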

1.5.3. Deleting a Data Disk

You can delete a Data Disk only if it is not attached to a VM. Be aware that once you delete a Data Disk, you cannot undo the action.

  1. Open the Azure Dashboard and click All resources from the left side menu.
  2. Filter by Disks type, then locate and select the check box for your data disk.
  3. Click Delete.
  4. Enter ‘yes’ to confirm, then click Delete.

2. Release Notes for NVIDIA Virtual Machine Images on Azure

NVIDIA makes available on the Microsoft Azure platform customized machine images based on the NVIDIA® Tesla Volta™ and Pascal™ GPUs. Running NVIDIA GPU Cloud containers on these instances provides optimum performance for deep learning, machine learning, and HPC workloads.

See the Using NGC with Azure Setup Guide for instructions on setting up and using the VMI.

2.1. Version 20.03.1

Image Name

  • NGC Image: NVIDIA-GPU-Cloud-Image-2020.03.24
  • TensorFlow from NVIDIA Image: NVIDIA-GPU-Cloud-Image-tensorflow-2020.03.24
  • PyTorch from NVIDIA Image: NVIDIA-GPU-Cloud-Image-pytorch-2020.03.24

Contents of the NVIDIA GPU Cloud Image

  • Ubuntu Server: 18.04 LTS
  • NVIDIA Driver:  440.64.01
  • Docker Engine:   19.03.6
  • NVIDIA Container Toolkit v1.0.5-1

    Includes new command to run containers: docker run --gpus all <container>

  • TensorFlow container (TensorFlow from NVIDIA image): nvcr.io/nvidia/tensorflow:20.02-tf1-py3, nvcr.io/nvidia/tensorflow:20.02-tf2-py3
  • PyTorch container (PyTorch from NVIDIA image): nvcr.io/nvidia/pytorch:20.02-py3

Key Changes

  • Updated Docker Engine to 19.03.6
  • Updated NVIDIA Driver to 440.64.01

Known Issues

Installing GPU drivers on the VM via a CUDA Install Succeeds Erroneously

Issue

Attempting to install CUDA on the VM will succeed, resulting in a potential conflict with the NVIDIA GPU driver included in the VM image.

Explanation

The configuration file to prevent driver installs is not working. This will be resolved in a later release of the VM image.

2.2. Version 19.11.3

Image Name

  • NGC Image: NVIDIA-GPU-Cloud-Image-2019.11.14
  • TensorFlow from NVIDIA Image: NVIDIA-GPU-Cloud-Image-tensorflow-2019.11.14
  • PyTorch from NVIDIA Image: NVIDIA-GPU-Cloud-Image-pytorch-2019.11.14

Contents of the NVIDIA GPU Cloud Image

  • Ubuntu Server: 18.04 LTS
  • NVIDIA Driver:  440.33.01
  • Docker CE:   19.03.4-ce
  • NVIDIA Container Toolkit v1.0.5-1

    Includes new command to run containers: docker run --gpus all <container>

  • TensorFlow container (TensorFlow from NVIDIA image): nvcr.io/nvidia/tensorflow:19.10-py3
  • PyTorch container (PyTorch from NVIDIA image): nvcr.io/nvidia/pytorch:19.10-py3

Key Changes

  • Updated Docker-CE to 19.03.4
  • Updated NVIDIA Driver to 440.33.01

Known Issues

Installing GPU drivers on the VM via a CUDA Install Succeeds Erroneously

Issue

Attempting to install CUDA on the VM will succeed, resulting in a potential conflict with the NVIDIA GPU driver included in the VM image.

Explanation

The configuration file to prevent driver installs is not working. This will be resolved in a later release of the VM image.

2.3. Version 19.10.2

Image Name

  • NGC Image: NVIDIA-GPU-Cloud-Image-2019.10.02
  • TensorFlow from NVIDIA Image: NVIDIA-GPU-Cloud-Image-tensorflow-2019.10.02
  • PyTorch from NVIDIA Image: NVIDIA-GPU-Cloud-Image-pytorch-2019.10.02

Contents of the NVIDIA GPU Cloud Image

  • Ubuntu Server: 18.04 LTS
  • NVIDIA Driver:  418.87
  • Docker CE:   19.03.2-ce
  • NVIDIA Container Toolkit v1.0.5-1
  • TensorFlow container (TensorFlow from NVIDIA image): nvcr.io/nvidia/tensorflow:19.09-py3
  • PyTorch container (PyTorch from NVIDIA image): nvcr.io/nvidia/pytorch:19.09-py3

Key Changes

  • Updated Docker-CE to 19.03.2
  • Replaced the NVIDIA Container Runtime for Docker with the NVIDIA Container Toolkit

    Includes new command to run containers: docker run --gpus all <container>

Known Issues

Installing GPU drivers on the VM via a CUDA Install Succeeds Erroneously

Issue

Attempting to install CUDA on the VM will succeed, resulting in a potential conflict with the NVIDIA GPU driver included in the VM image.

Explanation

The configuration file to prevent driver installs is not working. This will be resolved in a later release of the VM image.

2.4. Version 19.08.1

Image Name

  • NGC Image: NVIDIA-GPU-Cloud-Image-2019.07.22
  • TensorFlow from NVIDIA Image: NVIDIA-GPU-Cloud-Image-tensorflow-2019.07.22
  • PyTorch from NVIDIA Image: NVIDIA-GPU-Cloud-Image-pytorch-2019.07.22

Contents of the NVIDIA GPU Cloud Image

  • Ubuntu Server: 18.04 LTS
  • NVIDIA Driver:  418.87
  • Docker CE:   18.09.8-ce
  • NVIDIA Container Runtime for Docker: (nvidia-docker2) v2.1.0-1
  • TensorFlow container (TensorFlow from NVIDIA image): nvcr.io/nvidia/tensorflow:19.06-py3
  • PyTorch container (PyTorch from NVIDIA image): nvcr.io/nvidia/pytorch:19.06-py3

Key Changes

  • Updated NVIDIA Driver to version 418.87.
  • Updated Docker-CE to 18.09.8
  • Updated NVIDIA Container Runtime for Docker to v2.1.0-1

Known Issues

Installing GPU drivers on the VM via a CUDA Install Succeeds Erroneously

Issue

Attempting to install CUDA on the VM will succeed, resulting in a potential conflict with the NVIDIA GPU driver included in the VM image.

Explanation

The configuration file to prevent driver installs is not working. This will be resolved in a later release of the VM image.

2.5. Version 19.07.1

Image Name

  • NGC Image: NVIDIA-GPU-Cloud-Image-2019.07.10
  • TensorFlow from NVIDIA Image: NVIDIA-GPU-Cloud-Image-tensorflow-2019.07.10
  • PyTorch from NVIDIA Image: NVIDIA-GPU-Cloud-Image-pytorch-2019.07.10

Contents of the NVIDIA GPU Cloud Image

  • Ubuntu Server: 18.04 LTS
  • NVIDIA Driver:  418.67
  • Docker CE:   18.09.7-ce
  • NVIDIA Container Runtime for Docker: (nvidia-docker2) v2.0.3
  • TensorFlow container (TensorFlow from NVIDIA image): nvcr.io/nvidia/tensorflow:19.06-py3
  • PyTorch container (PyTorch from NVIDIA image): nvcr.io/nvidia/pytorch:19.06-py3

Key Changes

Known Issues

Installing GPU drivers on the VM via a CUDA Install Succeeds Erroneously

Issue

Attempting to install CUDA on the VM will succeed, resulting in a potential conflict with the NVIDIA GPU driver included in the VM image.

Explanation

The configuration file to prevent driver installs is not working. This will be resolved in a later release of the VM image.

2.6. Version 19.05.1

Image Name

  • NGC Image: NVIDIA-GPU-Cloud-Image-2019.05.22
  • TensorFlow from NVIDIA Image: NVIDIA-GPU-Cloud-Image-tensorflow-2019.05.22
  • PyTorch from NVIDIA Image: NVIDIA-GPU-Cloud-Image-pytorch-2019.05.22

Contents of the NVIDIA GPU Cloud Image

  • Ubuntu Server: 18.04 LTS
  • NVIDIA Driver:  418.67
  • Docker CE:   18.09.4-ce
  • NVIDIA Container Runtime for Docker: (nvidia-docker2) v2.0.3
  • TensorFlow container (TensorFlow from NVIDIA image): nvcr.io/nvidia/tensorflow:19.04-py3
  • PyTorch container (PyTorch from NVIDIA image): nvcr.io/nvidia/pytorch:19.04-py3

Key Changes

19.05.1

19.05.0

  • Initial release of the PyTorch from NVIDIA and TensorFlow from NVIDIA images
  • Updated the NVIDIA Driver to 418.67
  • Updated Docker to 18.09.4-ce

Known Issues

Installing GPU drivers on the VM via a CUDA Install Succeeds Erroneously

Issue

Attempting to install CUDA on the VM will succeed, resulting in a potential conflict with the NVIDIA GPU driver included in the VM image.

Explanation

The configuration file to prevent driver installs is not working. This will be resolved in a later release of the VM image.

2.7. Version 19.03.0

Image Name

  • NGC Image: NVIDIA-GPU-Cloud-Image-2019.03.18

Contents of the NVIDIA GPU Cloud Image

  • Ubuntu Server: 18.04 LTS
  • NVIDIA Driver:  418.40.04
  • Docker CE:   18.09.2-ce
  • NVIDIA Container Runtime for Docker: (nvidia-docker2) v2.0.3

Key Changes

  • Updated the NVIDIA Driver to 418.40.04
  • Updated Docker to 18.09.2-ce

Known Issues

There are no known issues in this release.

2.8. Version 19.02.0

Image Name

  • NGC Image: NVIDIA-GPU-Cloud-Image-2019.02.28

Contents of the NVIDIA GPU Cloud Image

  • Ubuntu Server: 18.04 LTS
  • NVIDIA Driver:  410.104
  • Docker CE:   18.09.1-ce
  • NVIDIA Container Runtime for Docker: (nvidia-docker2) v2.0.3

Key Changes

  • Updated the NVIDIA Driver to 410.104
  • Updated Docker to 18.09.1-ce

Known Issues

There are no known issues in this release.

2.9. Version 19.01.0

Image Name

NVIDIA-GPU-Cloud-Image-20190104

Contents of the NVIDIA GPU Cloud Image

  • Ubuntu Server: 18.04 LTS
  • NVIDIA Driver:  410.79
  • Docker CE:   18.06.1
  • NVIDIA Container Runtime for Docker: (nvidia-docker2) v2.0.3

Key Changes

  • Updated the Ubuntu Server to 18.04 LTS.

Known Issues

There are no known issues in this release.

2.10. Version 18.11.1

Image Name

NVIDIA-GPU-Cloud-Image-20181121

Contents of the NVIDIA GPU Cloud Image

  • Ubuntu Server: 16.04 LTS
  • NVIDIA Driver:  410.79
  • Docker CE:   18.06.1
  • NVIDIA Container Runtime for Docker: (nvidia-docker2) v2.0.3

Key Changes

  • Updated NVIDIA driver to Release 410 version 410.79.

Known Issues

There are no known issues in this release.

2.11. Version 18.09.1

Image Name

NVIDIA GPU Cloud image 18.09.1

Contents of the NVIDIA GPU Cloud Image

  • Ubuntu Server: 16.04 LTS
  • NVIDIA Driver:  410.48
  • Docker CE:   18.06.1
  • NVIDIA Container Runtime for Docker: (nvidia-docker2) v2.0.3

Key Changes

  • Updated NVIDIA driver to Release 410 version 410.48.
  • Updated Docker CE to 18.06.1

Known Issues

There are no known issues in this release.

2.12. Version 18.08.0

Image Name

NVIDIA GPU Cloud image 18.08.0

Contents of the NVIDIA GPU Cloud Image

  • Ubuntu Server: 16.04 LTS
  • NVIDIA Driver:  396.44
  • Docker CE:   18.06-ce
  • NVIDIA Container Runtime for Docker: (nvidia-docker2) v2.0.3

Key Changes

  • Initial Release

Known Issues

There are no known issues in this release.

Notices

Notice

THE INFORMATION IN THIS GUIDE AND ALL OTHER INFORMATION CONTAINED IN NVIDIA DOCUMENTATION REFERENCED IN THIS GUIDE IS PROVIDED “AS IS.” NVIDIA MAKES NO WARRANTIES, EXPRESSED, IMPLIED, STATUTORY, OR OTHERWISE WITH RESPECT TO THE INFORMATION FOR THE PRODUCT, AND EXPRESSLY DISCLAIMS ALL IMPLIED WARRANTIES OF NONINFRINGEMENT, MERCHANTABILITY, AND FITNESS FOR A PARTICULAR PURPOSE. Notwithstanding any damages that customer might incur for any reason whatsoever, NVIDIA’s aggregate and cumulative liability towards customer for the product described in this guide shall be limited in accordance with the NVIDIA terms and conditions of sale for the product.

THE NVIDIA PRODUCT DESCRIBED IN THIS GUIDE IS NOT FAULT TOLERANT AND IS NOT DESIGNED, MANUFACTURED OR INTENDED FOR USE IN CONNECTION WITH THE DESIGN, CONSTRUCTION, MAINTENANCE, AND/OR OPERATION OF ANY SYSTEM WHERE THE USE OR A FAILURE OF SUCH SYSTEM COULD RESULT IN A SITUATION THAT THREATENS THE SAFETY OF HUMAN LIFE OR SEVERE PHYSICAL HARM OR PROPERTY DAMAGE (INCLUDING, FOR EXAMPLE, USE IN CONNECTION WITH ANY NUCLEAR, AVIONICS, LIFE SUPPORT OR OTHER LIFE CRITICAL APPLICATION). NVIDIA EXPRESSLY DISCLAIMS ANY EXPRESS OR IMPLIED WARRANTY OF FITNESS FOR SUCH HIGH RISK USES. NVIDIA SHALL NOT BE LIABLE TO CUSTOMER OR ANY THIRD PARTY, IN WHOLE OR IN PART, FOR ANY CLAIMS OR DAMAGES ARISING FROM SUCH HIGH RISK USES.

NVIDIA makes no representation or warranty that the product described in this guide will be suitable for any specified use without further testing or modification. Testing of all parameters of each product is not necessarily performed by NVIDIA. It is customer’s sole responsibility to ensure the product is suitable and fit for the application planned by customer and to do the necessary testing for the application in order to avoid a default of the application or the product. Weaknesses in customer’s product designs may affect the quality and reliability of the NVIDIA product and may result in additional or different conditions and/or requirements beyond those contained in this guide. NVIDIA does not accept any liability related to any default, damage, costs or problem which may be based on or attributable to: (i) the use of the NVIDIA product in any manner that is contrary to this guide, or (ii) customer product designs.

Other than the right for customer to use the information in this guide with the product, no other license, either expressed or implied, is hereby granted by NVIDIA under this guide. Reproduction of information in this guide is permissible only if reproduction is approved by NVIDIA in writing, is reproduced without alteration, and is accompanied by all associated conditions, limitations, and notices.

Trademarks

NVIDIA and the NVIDIA logo are trademarks and/or registered trademarks of NVIDIA Corporation in the United States and other countries. Other company and product names may be trademarks of the respective companies with which they are associated.