



Created on Jun 30, 2019

Introduction

This document demonstrates a deployment procedure for RDMA-accelerated applications running in Linux Containers (LXC) over a Mellanox end-to-end 100 Gb/s InfiniBand (IB) solution.

It describes the process of building an LXD container on physical servers running Ubuntu 16.04.2 LTS with LXD 2.16, and shows how to update and install the Mellanox software and hardware components on the host and in the LXD container.


Setup Overview

Equipment

Server Logical Design

Server Wiring

In our reference setup, we wire the 1st port to the InfiniBand switch and do not use the 2nd port.

Network Configuration

We will use two servers in our setup.

Each server is connected to the SB7700 switch by a 100Gb/s IB copper cable. The switch port connectivity in our case is as follows:

  • Ports 1-2 – connected to the host servers

The server names and network configuration are provided below.

Server type    Server name    Internal network      External network
Server 01      clx-mld-41     ib0: 12.12.12.41      eno1: from DHCP (reserved)
Server 02      clx-mld-42     ib0: 12.12.12.42      eno1: from DHCP (reserved)

Deployment Guide

Prerequisites

Update Ubuntu Software Packages

To update/upgrade Ubuntu software packages, run the commands below.

Server CLI
sudo apt-get update            # Fetches the list of available updates
sudo apt-get upgrade -y        # Upgrades the currently installed packages

Enable the Subnet Manager (SM) on the IB Switch

Refer to the MLNX-OS User Manual to become familiar with the switch software (located at support.mellanox.com).
Before starting to use the Mellanox switch, we recommend upgrading the switch to the latest MLNX-OS version.

There are three options to select the best place to locate the SM:

  1. Enabling the SM on one of the managed switches. This is a very convenient and quick operation that makes InfiniBand 'plug & play'.
  2. Running /etc/init.d/opensmd on one or more servers. It is recommended to run the SM on a server when there are 648 nodes or more.
  3. Using a dedicated Unified Fabric Management (UFM®) Appliance server. UFM offers much more than the SM. UFM needs more compute power than the existing switches have; while it does not require an expensive server, the dedicated server does represent an additional cost.

We'll explain options 1 and 2 only.

Option 1: Configuring the SM on a switch (MLNX-OS®, all Mellanox switch systems).
To enable the SM on one of the managed switches, follow these steps.

  1. Log in to the switch and enter config mode:

    Switch Console
    Mellanox MLNX-OS Switch Management 
    
    switch login: admin 
    Password:  
    Last login: Wed Aug 12 23:39:01 on ttyS0 
    
    Mellanox Switch 
    
    switch [standalone: master] > enable 
    switch [standalone: master] # conf t 
    switch [standalone: master] (config)#
  2. Run the command: 

    Switch Console
    switch [standalone: master] (config)#ib sm 
    switch [standalone: master] (config)#
  3. Check if the SM is running. Run:

    Switch Console
    switch [standalone: master] (config)#show ib sm 
    enable 
    switch [standalone: master] (config)#

To save the configuration (permanently), run:

Switch Console
switch (config) # configuration write

  

Option 2: Configuring the SM on a Server (skip this procedure if you enabled the SM on the switch)

To start up OpenSM on a server, simply run opensm from the command line on your management node by typing:

Server CLI
opensm

Or:

Start OpenSM automatically on the head node by editing the  /etc/opensm/opensm.conf  file.

Create a configuration file by running:

Server CLI
opensm --config /etc/opensm/opensm.conf

Edit the /etc/opensm/opensm.conf file and set the following line:

Server CLI - vi editor
onboot=yes

Upon initial installation, OpenSM is configured and running with a default routing algorithm. When running a multi-tier fat-tree cluster, it is recommended to change the following options to create the most efficient routing algorithm delivering the highest performance:

Server CLI
--routing_engine=updn

For full details on other configurable attributes of OpenSM, see the “OpenSM – Subnet Manager” chapter of the Mellanox OFED for Linux User Manual.
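
If you run OpenSM on a server, you can verify that a master SM is active on the fabric with the sminfo utility from the infiniband-diags tools (installed together with MLNX_OFED in the next section); a minimal sketch:

Server CLI
/etc/init.d/opensmd start     # start OpenSM via the init script installed with MLNX_OFED
sminfo                        # should report the SM LID and GUID with state SMINFO_MASTER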

Installing Mellanox OFED for Ubuntu on a Host

This chapter describes how to install and test the Mellanox OFED for Linux package on a single host machine with a Mellanox ConnectX®-5 adapter card installed. For more information, refer to the Mellanox OFED for Linux User Manual.

Downloading Mellanox OFED

  1. Verify that the system has a Mellanox network adapter (HCA/NIC) installed.

    Server CLI
    lspci -v | grep Mellanox

    The output of this command lists the installed Mellanox HCA(s).

  2. Download the MLNX_OFED package for your OS to your host.
    The package name has the format
    MLNX_OFED_LINUX-<ver>-<OS label>-<CPU arch>.tgz (it is also available as an .iso image). You can download it from:
    http://www.mellanox.com > Products > Software > InfiniBand/VPI Drivers > Mellanox OFED Linux (MLNX_OFED) > Download.
     


  3. Use the MD5SUM utility to confirm the downloaded file’s integrity. Run the following command and compare the result to the value provided on the download page.

    Server CLI
    md5sum MLNX_OFED_LINUX-<ver>-<OS label>.tgz

Installing Mellanox OFED

MLNX_OFED is installed by running the mlnxofedinstall script. The installation script performs the following:

  • Discovers the currently installed kernel
  • Uninstalls any software stacks that are part of the standard operating system distribution or another vendor's commercial stack
  • Installs the MLNX_OFED_LINUX binary RPMs (if they are available for the current kernel)
  • Identifies the currently installed InfiniBand and Ethernet network adapters and automatically upgrades the firmware

The installation script removes all previously installed Mellanox OFED packages and re-installs from scratch. You will be prompted to acknowledge the deletion of the old packages.

  1. Log into the installation machine as root.
  2. Copy the downloaded tgz to /tmp and extract it:

    Server CLI
    cd /tmp
    tar -xzvf MLNX_OFED_LINUX-4.1-1.0.2.0-ubuntu16.04-x86_64.tgz
    cd MLNX_OFED_LINUX-4.1-1.0.2.0-ubuntu16.04-x86_64/
  3. Run the installation script.

    Server CLI
    ./mlnxofedinstall
  4. Reboot after the installation has finished successfully.

    Server CLI
    /etc/init.d/openibd restart
    reboot

    By default both ConnectX®-5 VPI ports are initialized as InfiniBand ports.

  5. Disable the unused 2nd port on the device (optional).
    Identify the PCI IDs of your NIC ports:

    Server CLI
    lspci | grep Mellanox
    05:00.0 Infiniband controller: Mellanox Technologies Device 1019
    
    05:00.1 Infiniband controller: Mellanox Technologies Device 1019

    Disable the 2nd port:

    Server CLI
    echo 0000:05:00.1 > /sys/bus/pci/drivers/mlx5_core/unbind
  6. Check that the ports' mode is InfiniBand:

    Server CLI
    ibv_devinfo

     

  7. If the port type is shown as Ethernet, you need to change the interface port type to InfiniBand.

    ConnectX®-5 ports can be individually configured to work as InfiniBand or Ethernet ports.
    Change the mode to InfiniBand using the mlxconfig tool after the driver is loaded.
    * LINK_TYPE_P1=1 is InfiniBand mode
    a. Start mst and list the port names:

    Server CLI
    mst start
    mst status

    b. Change the port mode to InfiniBand:

    Server CLI
    mlxconfig -d /dev/mst/mt4121_pciconf0 s LINK_TYPE_P1=1

    Port 1 is now set to IB mode. Reboot the server:

    Server CLI
    reboot

    After each reboot, you need to disable the 2nd port again (see the sketch after this procedure for one way to automate this).
    c. Query the InfiniBand devices and print the information available from userspace:

    Server CLI
    ibv_devinfo 
  8. Run the ibdev2netdev utility to see all the associations between the Ethernet devices and the IB devices/ports, then assign an IP address to the ib0 interface:

    Server CLI
    ibdev2netdev
    
    ifconfig ib0 12.12.12.41 netmask 255.255.255.0
  9. Add the lines below to the /etc/network/interfaces file, after the existing eno1 lines:

    Server CLI
    vim /etc/network/interfaces

    Example:

    Server CLI - vi editor
    auto eno1
    iface eno1 inet dhcp

    # The new lines:
    auto ib0
    iface ib0 inet static
    address 12.12.12.41
    netmask 255.255.255.0
  10. Check that the network configuration is set correctly:

    Server CLI
    ifconfig -a
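
Since the unbind of the unused 2nd port (step 5) does not persist across reboots, one way to automate it is a small systemd oneshot unit. The sketch below is hypothetical; it assumes the PCI address 0000:05:00.1 from the example above and the openibd unit name installed by MLNX_OFED, so adjust both to your system.

Server CLI - vi editor
# /etc/systemd/system/disable-ib-port2.service (hypothetical unit name)
[Unit]
Description=Unbind the unused second ConnectX-5 port
After=openibd.service

[Service]
Type=oneshot
ExecStart=/bin/sh -c 'echo 0000:05:00.1 > /sys/bus/pci/drivers/mlx5_core/unbind'

[Install]
WantedBy=multi-user.target

Enable it with "systemctl daemon-reload && systemctl enable disable-ib-port2.service".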

Installing and Configuring LXD

Installing LXD

To install LXD (current version 2.16), we recommend using the official Ubuntu PPA (Personal Package Archive):

Server CLI
sudo apt-add-repository ppa:ubuntu-lxc/stable
sudo apt update
sudo apt dist-upgrade
sudo apt install lxd
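
To confirm that the expected LXD version (2.16 in this example) was installed, you can check the client and daemon versions:

Server CLI
lxc --version
lxd --version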

Configuring LXD

To configure storage and networking, go through the full LXD step-by-step setup with:

Server CLI
sudo lxd init

Here is an example execution of the “lxd init” command. In this example we configure the installation with the default “dir” storage backend and a “lxdbr0” bridge for convenience.

This bridge comes unconfigured by default, offering only IPv6 link-local connectivity through an HTTP proxy.

ZFS is warmly recommended, as it supports all the features LXD needs to offer the fastest and most reliable container experience.
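
If you do want the ZFS backend instead of "dir" (it is not used in the example below), a minimal sketch of the extra preparation on Ubuntu 16.04 is to install the ZFS userspace tools first and then pick "zfs" when "lxd init" prompts for the storage backend:

Server CLI
sudo apt install -y zfsutils-linux    # provides ZFS support; "zfs" then appears as a storage backend option
sudo lxd init                         # select "zfs" as the storage backend when prompted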

Server CLI
Do you want to configure a new storage pool (yes/no) [default=yes]? Enter

Name of the new storage pool [default=default]: Enter

Name of the storage backend to use (dir, btrfs, lvm) [default=dir]: Enter

Would you like LXD to be available over the network (yes/no) [default=no]? Enter

Would you like stale cached images to be updated automatically (yes/no) [default=yes]? Enter

Would you like to create a new network bridge (yes/no) [default=yes]? Enter

What should the new bridge be called [default=lxdbr0]? Enter

What IPv4 address should be used (CIDR subnet notation, "auto" or "none") [default=auto]? Enter

What IPv6 address should be used (CIDR subnet notation, "auto" or "none") [default=auto]? none


LXD has been successfully configured.

You can then look at the “lxdbr0” bridge config with:

Server CLI
lxc network show lxdbr0

Its output is shown below.

Server CLI output
config:
  ipv4.address: 10.141.11.1/24
  ipv4.nat: "true"
  ipv6.address: none
description: ""
name: lxdbr0
type: bridge


Preparing the Container's Network

Create a /etc/dnsmasq.conf.lab file:

Server CLI
vim /etc/dnsmasq.conf.lab

and add these lines:

Server CLI - vi editor
domain=lab-ml.cloudx.mlnx
# verbose
log-queries
log-dhcp
dhcp-option=6,8.8.8.8

Run the following commands to change the IPv4 network and add the dnsmasq.conf.lab configuration:

Server CLI
lxc network set lxdbr0 ipv4.address 10.10.41.1/24                                                         
lxc network set lxdbr0 raw.dnsmasq "conf-file=/etc/dnsmasq.conf.lab" 

and look at the “lxdbr0” bridge config with:

Server CLI
lxc network show lxdbr0

Its output is shown below.

Server CLI output
config:
  ipv4.address: 10.10.41.1/24
  ipv4.nat: "true"
  ipv6.address: none
  raw.dnsmasq: conf-file=/etc/dnsmasq.conf.lab
description: ""
name: lxdbr0
type: bridge


Changing the LXD Service Configuration for the Containers' Static MAC and IP Addresses (Optional)

Run this procedure on each host.

Edit the lxd service file:

Server CLI
vim /lib/systemd/system/lxd.service

Add the following ExecStartPost line to the [Service] section (change c41 and the 10.10.41 subnet for the other hosts):

ExecStartPost=/bin/bash -c 'rm -f /var/lib/lxd/networks/lxdbr0/dnsmasq.hosts && for i in {2..254}; do echo "00:16:3e:41:01:$(printf '%02x' $i),10.10.41.$i,c41$i" >> /var/lib/lxd/networks/lxdbr0/dnsmasq.hosts ; done'

The resulting [Service] section looks like this:

Server CLI
[Service]
EnvironmentFile=-/etc/environment
ExecStartPre=/usr/lib/x86_64-linux-gnu/lxc/lxc-apparmor-load
ExecStart=/usr/bin/lxd --group lxd --logfile=/var/log/lxd/lxd.log
ExecStartPost=/usr/bin/lxd waitready --timeout=600
ExecStartPost=/bin/bash -c 'rm -f /var/lib/lxd/networks/lxdbr0/dnsmasq.hosts && for i in {2..254}; do echo "00:16:3e:41:01:$(printf '%02x' $i),10.10.41.$i,c41$i" >> /var/lib/lxd/networks/lxdbr0/dnsmasq.hosts ; done'
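
As an alternative to editing the packaged unit file under /lib/systemd/system (which may be overwritten by package upgrades), the same ExecStartPost line could be placed in a systemd drop-in override; this is only a suggestion, while the guide itself edits lxd.service directly. Note that the service status output further below does show such a Drop-In (override.conf).

Server CLI
sudo systemctl edit lxd               # creates /etc/systemd/system/lxd.service.d/override.conf

Server CLI - vi editor
[Service]
# paste here the same ExecStartPost=... line shown above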

 Restart the lxd service:

Server CLI
systemctl daemon-reload 
killall -SIGHUP dnsmasq 
service lxd restart 
service lxd status

Check the /var/lib/lxd/networks/lxdbr0/dnsmasq.hosts file:

Server CLI
cat /var/lib/lxd/networks/lxdbr0/dnsmasq.hosts
Server CLI output
00:16:3e:41:01:02,10.10.41.2,c412
00:16:3e:41:01:03,10.10.41.3,c413
00:16:3e:41:01:04,10.10.41.4,c414
00:16:3e:41:01:05,10.10.41.5,c415
00:16:3e:41:01:06,10.10.41.6,c416  
...          


If you don't see these entries, restart the lxd service and check again:

Server CLI
service lxd restart


Check the LXD service status:

Server CLI
service lxd status
Server CLI output
lxd.service - LXD - main daemon
   Loaded: loaded (/lib/systemd/system/lxd.service; indirect; vendor preset: enabled)
  Drop-In: /etc/systemd/system/lxd.service.d
           override.conf
   Active: active (running) since Thu 2017-08-10 14:57:33 IDT; 3min 38s ago
     Docs: man:lxd(1)
  Process: 6406 ExecStartPost=/bin/bash -c rm -f /var/lib/lxd/networks/lxdbr0/dnsmasq.hosts && for i in {2..254};
  Process: 6326 ExecStartPost=/usr/bin/lxd waitready --timeout=600 (code=exited, status=0/SUCCESS)
  Process: 6314 ExecStartPre=/usr/lib/x86_64-linux-gnu/lxc/lxc-apparmor-load (code=exited, status=0/SUCCESS)
Main PID: 6325 (lxd)
   Memory: 10.1M
      CPU: 324ms
   CGroup: /system.slice/lxd.service
           6325 /usr/bin/lxd --group lxd --logfile=/var/log/lxd/lxd.log
           6391 dnsmasq --strict-order --bind-interfaces --pid-file=/var/lib/lxd/networks/lxdbr0/dnsmasq.pid --e
Aug 10 14:57:33 clx-mld-41 dnsmasq[6391]: using local addresses only for domain lxd
Aug 10 14:57:33 clx-mld-41 dnsmasq[6391]: reading /etc/resolv.conf
Aug 10 14:57:33 clx-mld-41 dnsmasq[6391]: using local addresses only for domain lxd
Aug 10 14:57:33 clx-mld-41 dnsmasq[6391]: using nameserver 10.141.119.41#53
Aug 10 14:57:33 clx-mld-41 dnsmasq[6391]: using nameserver 8.8.8.8#53
Aug 10 14:57:33 clx-mld-41 dnsmasq[6391]: read /etc/hosts - 5 addresses
Aug 10 14:57:33 clx-mld-41 dnsmasq-dhcp[6391]: read /var/lib/lxd/networks/lxdbr0/dnsmasq.hosts
Aug 10 14:57:33 clx-mld-41 dnsmasq[6391]: read /etc/hosts - 5 addresses
Aug 10 14:57:33 clx-mld-41 dnsmasq-dhcp[6391]: read /var/lib/lxd/networks/lxdbr0/dnsmasq.hosts
Aug 10 14:57:33 clx-mld-41 systemd[1]: Started LXD - main daemon.

Add static routing on each host (the sample below is from host 43):

Server CLI
sudo route add -net 10.10.42.0/24 gw 12.12.12.42
sudo route
Server CLI output
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
10.10.42.0      12.12.12.42    255.255.255.0   UG   0      0        0 ib1
10.10.41.0      *              255.255.255.0   U    0      0        0 lxdbr0
10.141.119.0    *              255.255.255.0   U    0      0        0 enp129s0f0
12.12.12.0      *              255.255.255.0   U    0      0        0 ib1
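
Note that routes added with the route command are not persistent across reboots. One way to make the route permanent, assuming the ib0 stanza added to /etc/network/interfaces earlier in this guide, is a post-up line; a minimal sketch for host 41:

Server CLI - vi editor
auto ib0
iface ib0 inet static
address 12.12.12.41
netmask 255.255.255.0
# re-add the static route to the other host's container subnet on boot
post-up route add -net 10.10.42.0/24 gw 12.12.12.42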


Preparing the LXC Container

By default, LXD creates unprivileged containers. This means that root in the container is a non-root UID on the host. It is privileged against the resources owned by the container, but unprivileged with respect to the host, making root in a container roughly equivalent to an unprivileged user on the host. (The main exception is the increased attack surface exposed through the system call interface)

Briefly, in an unprivileged container, 65536 UIDs are 'shifted' into the container. For instance, UID 0 in the container may be 100000 on the host, UID 1 in the container is 100001, etc., up to 165535. The starting values for UIDs and GIDs are determined by the 'root' entries in the /etc/subuid and /etc/subgid files, respectively.
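
For reference, the 'root' entries on a stock Ubuntu host typically look like the following (the values shown here are illustrative and vary per host):

Server CLI
cat /etc/subuid
# root:100000:65536
cat /etc/subgid
# root:100000:65536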

We need the container to run without a UID mapping, which is done by setting the security.privileged flag to true (change it in the default profile):

Server CLI
lxc profile set default security.privileged true
Note: In this case, the root user in the container is the root user on the host.

Running verbs and RDMA-based applications in a container requires access to the host OS's InfiniBand devices (the uverbs interface). This access can be granted to a container by running the following command (which changes the default profile):

Server CLI
lxc profile device add default uverbs1 unix-char source=/dev/infiniband/uverbs1
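
Depending on the application, you may also need to expose other devices from /dev/infiniband, for example the RDMA connection manager (rdma_cm) for RDMA CM based applications or the MAD interface (umad1) for management tools. This is optional and uses the same command form; the device names below follow the example host in this guide:

Server CLI
lxc profile device add default rdmacm unix-char source=/dev/infiniband/rdma_cm
lxc profile device add default umad1 unix-char source=/dev/infiniband/umad1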

A host's InfiniBand devices can be seen by checking the contents of the /dev/infiniband/ folder.

Server CLI
sudo ls /dev/infiniband
Server CLI output
issm0 issm1 rdma_cm ucm0 ucm1 umad0 umad1 uverbs0 uverbs1
Server CLI
sudo ibdev2netdev
Server CLI output
mlx5_0 port 1 ==> enp5s0f0 (Down)
mlx5_1 port 1 ==> ib0 (Up)


In our example, there are two mlx5_ devices on the host, resulting in two ucm, umad, and uverbs interfaces in /dev/infiniband.
At runtime, you choose which devices are exposed to which running containers.
In our example, when running a single container, you may choose to expose the second InfiniBand device (uverbs1) to the running container.

To show the default profile, run:

Server CLI
lxc profile show default

You should see output similar to the following:

Server CLI output
config:
  environment.http_proxy: ""
  security.privileged: "true"
  user.network_mode: ""
description: Default LXD profile
devices:
  eth0:
    nictype: bridged
    parent: lxdbr0
    type: nic
  root:
    path: /
    pool: default
    type: disk
  uverbs1:
    source: /dev/infiniband/uverbs1
    type: unix-char
name: default


Creating a New Container

The syntax to create a container is:

Server CLI
lxc init images:{distro}/{version}/{arch} {container-name-here}

To create an Ubuntu 16.04 container, use the following command:

Server CLI
lxc init ubuntu:16.04 c412

Set a static MAC address for the container:

Server CLI
lxc config set c412 volatile.eth0.hwaddr "00:16:3e:41:01:02"

This creates a new Ubuntu 16.04 container, which can be confirmed with:

Server CLI
lxc list

To push the installation file to the container, use:

Server CLI
lxc file push MLNX_OFED_LINUX-4.1-1.0.2.0-ubuntu16.04-x86_64.tgz c412/tmp/

Another option is file sharing: mount a shared directory into the container to access the installer and example files.

Server CLI
lxc config device add c412 installs disk source=/root/installs path=/root/installs

Start the container:

Server CLI
lxc start c412

To log in and gain shell access in the container c412, enter:

Server CLI
lxc exec c412 -- bash
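
Once inside the container, you can quickly confirm that it received the expected static address and can reach the bridge. This assumes iproute2 and ping are present in the image (iproute2 is installed explicitly in the next section if not):

Container CLI
ip addr show eth0       # for c412 this should show 10.10.41.2 with the MAC set above
ping -c 3 10.10.41.1    # the lxdbr0 gateway on the host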

Installing the Container

Update Ubuntu Software Packages

To update/upgrade Ubuntu software packages, run the commands below.

Container CLI
sudo apt-get update            # Fetches the list of available updates
sudo apt-get upgrade -y        # Upgrades the currently installed packages

Installing Mellanox OFED in a Container

Verify that the system has a Mellanox network adapter (HCA/NIC) installed.

Container CLI
apt-get install pciutils
lspci -v | grep Mellanox


Installing Mellanox OFED

MLNX_OFED is installed by running the mlnxofedinstall script. The installation script performs the following:

  • Discovers the currently installed kernel
  • Uninstalls any software stacks that are part of the standard operating system distribution or another vendor's commercial stack
  • Installs the MLNX_OFED_LINUX binary RPMs (if they are available for the current kernel)
  • Identifies the currently installed InfiniBand and Ethernet network adapters and automatically upgrades the firmware

The installation script removes all previously installed Mellanox OFED packages and re-installs from scratch. You will be prompted to acknowledge the deletion of the old packages.

  1. Install required packages:

    Container CLI
    apt-get install -y net-tools ethtool perl lsb-release iproute2
  2. Log into the container as root and extract the package copied to /tmp earlier:

    Container CLI
    cd /tmp
    tar -xzvf MLNX_OFED_LINUX-4.1-1.0.2.0-ubuntu16.04-x86_64.tgz
    cd MLNX_OFED_LINUX-4.1-1.0.2.0-ubuntu16.04-x86_64/
  3. Run the installation script.

    Container CLI
    ./mlnxofedinstall --user-space-only --without-fw-update -q
  4. Check the MLNX_OFED version and the uverbs device:

    Container CLI
    ofed_info -s

    MLNX_OFED_LINUX-4.1-1.0.2.0:

    Container CLI
    ls /dev/infiniband/
    Container CLI output
    uverbs1
  5. Run a bandwidth stress test over IB in the container:

    Server side:

    Container CLI
    ib_write_bw -a -d mlx5_1 &

    Client side:

    Container CLI
    ib_write_bw -a -F $Server_IP -d mlx5_1 --report_gbits

    In this way, you can run a bandwidth stress test over IB between containers.
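
    Similarly, the perftest latency tools can be used between the same endpoints; a minimal sketch with the same assumptions (mlx5_1 device, $Server_IP reachable from the client container):

    Server side:

    Container CLI
    ib_write_lat -d mlx5_1

    Client side:

    Container CLI
    ib_write_lat -F $Server_IP -d mlx5_1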


Done!

Notice

This document is provided for information purposes only and shall not be regarded as a warranty of a certain functionality, condition, or quality of a product. Neither NVIDIA Corporation nor any of its direct or indirect subsidiaries and affiliates (collectively: “NVIDIA”) make any representations or warranties, expressed or implied, as to the accuracy or completeness of the information contained in this document and assumes no responsibility for any errors contained herein. NVIDIA shall have no liability for the consequences or use of such information or for any infringement of patents or other rights of third parties that may result from its use. This document is not a commitment to develop, release, or deliver any Material (defined below), code, or functionality.
NVIDIA reserves the right to make corrections, modifications, enhancements, improvements, and any other changes to this document, at any time without notice.
Customer should obtain the latest relevant information before placing orders and should verify that such information is current and complete.
NVIDIA products are sold subject to the NVIDIA standard terms and conditions of sale supplied at the time of order acknowledgement, unless otherwise agreed in an individual sales agreement signed by authorized representatives of NVIDIA and customer (“Terms of Sale”). NVIDIA hereby expressly objects to applying any customer general terms and conditions with regards to the purchase of the NVIDIA product referenced in this document. No contractual obligations are formed either directly or indirectly by this document.
NVIDIA products are not designed, authorized, or warranted to be suitable for use in medical, military, aircraft, space, or life support equipment, nor in applications where failure or malfunction of the NVIDIA product can reasonably be expected to result in personal injury, death, or property or environmental damage. NVIDIA accepts no liability for inclusion and/or use of NVIDIA products in such equipment or applications and therefore such inclusion and/or use is at customer’s own risk.
NVIDIA makes no representation or warranty that products based on this document will be suitable for any specified use. Testing of all parameters of each product is not necessarily performed by NVIDIA. It is customer’s sole responsibility to evaluate and determine the applicability of any information contained in this document, ensure the product is suitable and fit for the application planned by customer, and perform the necessary testing for the application in order to avoid a default of the application or the product. Weaknesses in customer’s product designs may affect the quality and reliability of the NVIDIA product and may result in additional or different conditions and/or requirements beyond those contained in this document. NVIDIA accepts no liability related to any default, damage, costs, or problem which may be based on or attributable to: (i) the use of the NVIDIA product in any manner that is contrary to this document or (ii) customer product designs.
No license, either expressed or implied, is granted under any NVIDIA patent right, copyright, or other NVIDIA intellectual property right under this document. Information published by NVIDIA regarding third-party products or services does not constitute a license from NVIDIA to use such products or services or a warranty or endorsement thereof. Use of such information may require a license from a third party under the patents or other intellectual property rights of the third party, or a license from NVIDIA under the patents or other intellectual property rights of NVIDIA.
Reproduction of information in this document is permissible only if approved in advance by NVIDIA in writing, reproduced without alteration and in full compliance with all applicable export laws and regulations, and accompanied by all associated conditions, limitations, and notices.
THIS DOCUMENT AND ALL NVIDIA DESIGN SPECIFICATIONS, REFERENCE BOARDS, FILES, DRAWINGS, DIAGNOSTICS, LISTS, AND OTHER DOCUMENTS (TOGETHER AND SEPARATELY, “MATERIALS”) ARE BEING PROVIDED “AS IS.” NVIDIA MAKES NO WARRANTIES, EXPRESSED, IMPLIED, STATUTORY, OR OTHERWISE WITH RESPECT TO THE MATERIALS, AND EXPRESSLY DISCLAIMS ALL IMPLIED WARRANTIES OF NONINFRINGEMENT, MERCHANTABILITY, AND FITNESS FOR A PARTICULAR PURPOSE. TO THE EXTENT NOT PROHIBITED BY LAW, IN NO EVENT WILL NVIDIA BE LIABLE FOR ANY DAMAGES, INCLUDING WITHOUT LIMITATION ANY DIRECT, INDIRECT, SPECIAL, INCIDENTAL, PUNITIVE, OR CONSEQUENTIAL DAMAGES, HOWEVER CAUSED AND REGARDLESS OF THE THEORY OF LIABILITY, ARISING OUT OF ANY USE OF THIS DOCUMENT, EVEN IF NVIDIA HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES. Notwithstanding any damages that customer might incur for any reason whatsoever, NVIDIA’s aggregate and cumulative liability towards customer for the products described herein shall be limited in accordance with the Terms of Sale for the product.

Trademarks
NVIDIA, the NVIDIA logo, and Mellanox are trademarks and/or registered trademarks of NVIDIA Corporation and/or Mellanox Technologies Ltd. in the U.S. and in other countries. Other company and product names may be trademarks of the respective companies with which they are associated.

Copyright
© 2022 NVIDIA Corporation & affiliates. All Rights Reserved.