Introduction
In this document we demonstrate a deployment procedure for RDMA-accelerated applications running in Docker containers over an NVIDIA end-to-end 100 Gb/s InfiniBand (IB) fabric.
We use Ubuntu 16.04.6 LTS as the host OS and install the latest Docker CE (installation steps are covered below). We show how to update and install the NVIDIA software and hardware components on the host and in the Docker container.
For network communication, each Docker container has two devices:
- A Linux bridge device for IP connectivity. The bridge is connected to the host IPoIB interface.
- A manually mapped InfiniBand uverbs device for RDMA traffic.
This guide covers Docker deployment without K8s orchestration. For K8s deployment, please proceed to this link: https://docs.mellanox.com/label/SOL/k8s
References
Setup Overview
Equipment
Server Logical Design
Docker Network Diagram
Server Wiring
In our reference setup we wire the first port to the InfiniBand switch and do not use the second port.
Network Configuration
We will use four servers in our setup.
Each server is connected to the SB7700 switch by a 100 Gb/s IB copper cable. The switch port connectivity in our case is as follows:
- Ports 1-4 – connected to the host servers
The server names and network configuration are provided below.
| Server type | Server name | Internal network | External network |
|---|---|---|---|
| Server 01 | clx-mld-41 | ib0: 12.12.12.41 | eno1: from DHCP (reserved) |
| Server 02 | clx-mld-42 | ib0: 12.12.12.42 | eno1: from DHCP (reserved) |
| Server 03 | clx-mld-43 | ib0: 12.12.12.43 | eno1: from DHCP (reserved) |
| Server 04 | clx-mld-44 | ib0: 12.12.12.44 | eno1: from DHCP (reserved) |
Deployment Guide
Prerequisites
Update Ubuntu Software Packages
To update/upgrade Ubuntu software packages, run the commands below.
sudo apt-get update       # Fetches the list of available updates
sudo apt-get upgrade -y   # Strictly upgrades the current packages
Enable the Subnet Manager (SM) on the IB Switch
Refer to the MLNX-OS User Manual to become familiar with switch software (located at enterprise-support.nvidia.com/s/).
Before starting to use the NVIDIA switch, we recommend upgrading the switch to the latest MLNX-OS version.
There are three options to select the best place to locate the SM:
- Enabling the SM on one of the managed switches. This is a very convenient and quick operation, and it makes the InfiniBand fabric 'plug & play'.
- Running /etc/init.d/opensmd on one or more servers. It is recommended to run the SM on a server when there are 648 nodes or more.
- Using a dedicated Unified Fabric Management (UFM®) Appliance server. UFM offers much more than the SM.
Note: UFM needs more compute power than the switches can provide, but does not require an expensive server. It does, however, represent an additional cost for the dedicated server.
We will explain options 1 and 2 only.
Option 1: Configuring the SM on a Switch (MLNX-OS®, all NVIDIA switch systems)
To enable the SM on one of the managed switches, follow these steps.
Log in to the switch and enter config mode:
NVIDIA MLNX-OS Switch Management
switch login: admin
Password:
Last login: Wed Aug 12 23:39:01 on ttyS0
NVIDIA Switch
switch [standalone: master] > enable
switch [standalone: master] # conf t
switch [standalone: master] (config) #
Run the command:
switch [standalone: master] (config) # ib sm
switch [standalone: master] (config) #
Check if the SM is running. Run:
switch [standalone: master] (config) # show ib sm
enable
switch [standalone: master] (config) #
To save the configuration (permanently), run:
switch (config) # configuration write
Option 2: Configuring the SM on a Server (skip this procedure if you enabled the SM on the switch)
To start up OpenSM on a server, simply run opensm from the command line on your management node by typing:
opensm
Or:
Start OpenSM automatically on the head node by editing the /etc/opensm/opensm.conf file.
Create a configuration file by running:
opensm --config /etc/opensm/opensm.conf
Edit the /etc/opensm/opensm.conf file to include the following line:
onboot=yes
Upon initial installation, OpenSM is configured and running with a default routing algorithm. When running a multi-tier fat-tree cluster, it is recommended to change the following options to create the most efficient routing algorithm delivering the highest performance:
--routing_engine=updn
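As an illustration, the routing engine can also be passed directly on the command line when OpenSM is started manually (a minimal sketch; the configuration-file approach above is what survives reboots):

opensm --routing_engine=updn   # start OpenSM in the foreground with up/down routing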
For full details on other configurable attributes of OpenSM, see the “OpenSM – Subnet Manager” chapter of the NVIDIA OFED for Linux User Manual.
Installing NVIDIA OFED for Ubuntu on a Host
This chapter describes how to install and test the NVIDIA OFED for Linux package on a single host machine with an NVIDIA ConnectX®-5 adapter card installed. For more information, see the NVIDIA OFED for Linux User Manual.
Downloading NVIDIA OFED
Verify that the system has an NVIDIA network adapter (HCA/NIC) installed:
lspci -v | grep Mellanox
The following example shows a system with an installed NVIDIA HCA:
- Download the ISO image matching your OS to your host.
The image's name has the format MLNX_OFED_LINUX-<ver>-<OS label>-<CPU arch>.iso. You can download it from:
https://www.nvidia.com/en-us/networking/ > Products > Software > InfiniBand/VPI Drivers > NVIDIA MLNX_OFED > Download. Use the MD5SUM utility to confirm the downloaded file’s integrity. Run the following command and compare the result to the value provided on the download page.
md5sum MLNX_OFED_LINUX-<ver>-<OS label>.tgz
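If the download page lists the checksum value, you can also let md5sum do the comparison itself. A small sketch (the checksum string below is a placeholder, not a real value):

echo "<md5-from-download-page>  MLNX_OFED_LINUX-<ver>-<OS label>.tgz" | md5sum -c -   # prints OK on a match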
Installing NVIDIA OFED
MLNX_OFED is installed by running the mlnxofedinstall script. The installation script performs the following:
- Discovers the currently installed kernel
- Uninstalls any software stacks that are part of the standard operating system distribution or another vendor's commercial stack
- Installs the MLNX_OFED_LINUX binary RPMs (if they are available for the current kernel)
- Identifies the currently installed InfiniBand and Ethernet network adapters and automatically upgrades the firmware
The installation script removes all previously installed NVIDIA OFED packages and re-installs from scratch. You will be prompted to acknowledge the deletion of the old packages.
- Log into the installation machine as root.
Copy the downloaded tgz to /tmp, extract it, and enter the extracted directory:
cd /tmp
tar -xzvf MLNX_OFED_LINUX-4.5-1.0.1.0-ubuntu16.04-x86_64.tgz
cd MLNX_OFED_LINUX-4.5-1.0.1.0-ubuntu16.04-x86_64/
Run the installation script.
./mlnxofedinstall
After the installation finishes successfully, restart the driver and reboot:
/etc/init.d/openibd restart
reboot
By default both ConnectX®-5 VPI ports are initialized as InfiniBand ports.
Check that the ports' mode is InfiniBand:
ibv_devinfo
If the link_layer of a port is reported as Ethernet, you need to change the interface port type to InfiniBand.
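For example, you can filter the ibv_devinfo output to show only the link layer of each port (a quick sketch; the exact output depends on your system):

ibv_devinfo | grep -i link_layer
# Expected for InfiniBand mode:
#     link_layer:     InfiniBand
# A port that reports "Ethernet" must be switched to InfiniBand as described below.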
Change the interface port type to InfiniBand mode. ConnectX®-5 ports can be individually configured to work as InfiniBand or Ethernet ports.
Change the mode to InfiniBand. Use the mlxconfig script after the driver is loaded.
* LINK_TYPE_P1=1 sets port 1 to InfiniBand mode.
a. Start mst and list the device names:
mst start
mst status
b. Change the mode of port 1 to InfiniBand (use LINK_TYPE_P2=1 as well to change port 2):
mlxconfig -d /dev/mst/mt4121_pciconf0 s LINK_TYPE_P1=1
Output:
Port 1 set to IB mode
Reboot the server for the change to take effect:
reboot
c. Query the InfiniBand devices and print the information available for use from user space:
ibv_devinfo
Run the ibdev2netdev utility to see all the associations between the network devices and the IB devices/ports, then assign an IP address to the ib0 interface:
ibdev2netdev
ifconfig ib0 12.12.12.41 netmask 255.255.255.0
To make the configuration persistent, add the lines below to the /etc/network/interfaces file:
vim /etc/network/interfaces
Example:
auto eno1
iface eno1 inet dhcp

auto ib0
iface ib0 inet static
address 12.12.12.41
netmask 255.255.255.0
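To apply the new configuration without a full reboot, you can bounce the interface (a sketch; ifdown may complain that ib0 is not configured if it was previously set up manually with ifconfig):

sudo ifdown ib0; sudo ifup ib0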
Check that the network configuration is set correctly:
ifconfig -a
Installing and Configuring Docker
Uninstall old versions
To uninstall old versions, we recommend running the following command:
sudo apt-get remove docker docker-engine docker.io
It’s OK if apt-get reports that none of these packages are installed.
The contents of /var/lib/docker/, including images, containers, volumes, and networks, are preserved.
Install Docker CE
For Ubuntu 16.04 and higher, the Linux kernel includes support for OverlayFS, and Docker CE will use the overlay2 storage driver by default.
Install using the repository
Before you install Docker CE for the first time on a new host machine, you need to set up the Docker repository. Afterward, you can install and update Docker from the repository.
Set Up the repository
Update the apt package index:
sudo apt-get update
Install packages to allow apt to use a repository over HTTPS:
sudo apt-get install apt-transport-https ca-certificates curl software-properties-common
Add Docker’s official GPG key:
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
Verify that the key fingerprint is 9DC8 5822 9FC7 DD38 854A E2D8 8D81 803C 0EBF CD88.
sudo apt-key fingerprint 0EBFCD88
pub   4096R/0EBFCD88 2017-02-22
      Key fingerprint = 9DC8 5822 9FC7 DD38 854A E2D8 8D81 803C 0EBF CD88
uid                  Docker Release (CE deb) <docker@docker.com>
sub   4096R/F273FCD8 2017-02-22
Install Docker CE
Install the latest version of Docker CE. Any existing installation of Docker is replaced.
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
sudo apt-get update
sudo apt-get install docker-ce
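Once the installation completes, a quick sanity check of the Docker version and the storage driver can look like this (the expected overlay2 value is based on the Ubuntu 16.04 default mentioned above):

sudo docker --version
sudo docker info | grep -i "storage driver"   # expect: Storage Driver: overlay2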
Customize the docker0 bridge
The recommended way to configure the Docker daemon is to use the daemon.json file, which is located in /etc/docker/ on Linux. If the file does not exist, create it. You can specify one or more of the following settings to configure the default bridge network:
{ "bip": "172.16.41.1/24", "fixed-cidr": "172.16.41.0/24", "mtu": 1500, "dns": ["8.8.8.8","8.8.4.4"] }
The same options are available as flags to dockerd, with an explanation for each:
- --bip=CIDR: supply a specific IP address and netmask for the docker0 bridge, using standard CIDR notation. For example: 172.16.41.1/24.
- --fixed-cidr=CIDR: restrict the IP range of the docker0 subnet, using standard CIDR notation. For example: 172.16.41.0/24.
- --mtu=BYTES: override the maximum packet length on docker0. For example: 1500.
- --dns=[]: the DNS servers to use. For example: --dns=8.8.8.8,8.8.4.4.
Restart Docker after making changes to the daemon.json file.
sudo /etc/init.d/docker restart
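To verify that the daemon picked up the new bridge settings, you can inspect the docker0 bridge and the default bridge network (a sketch using the values from our daemon.json):

ip addr show docker0                                  # should show inet 172.16.41.1/24
docker network inspect bridge | grep -E '"Subnet"|"Gateway"'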
Set up communication with the outside world
Check that IP forwarding is enabled in the kernel:
sysctl net.ipv4.conf.all.forwarding
net.ipv4.conf.all.forwarding = 1
If it is disabled, the command returns:
net.ipv4.conf.all.forwarding = 0
Enable it and check again:
sysctl net.ipv4.conf.all.forwarding=1
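To make the setting persistent across reboots, you can drop it into a sysctl configuration file (the file name 99-ipforward.conf is our own choice):

echo 'net.ipv4.conf.all.forwarding=1' | sudo tee /etc/sysctl.d/99-ipforward.conf
sudo sysctl --system   # reload all sysctl configuration files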
For security reasons, Docker configures the iptables rules to prevent containers from forwarding traffic from outside the host machine, on Linux hosts. Docker sets the default policy of the FORWARD chain to DROP.
To override this default behavior, you can manually change the default policy:
sudo iptables -P FORWARD ACCEPT
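You can confirm that the policy change took effect with a quick check (sketch):

sudo iptables -L FORWARD -n | head -1   # expect: Chain FORWARD (policy ACCEPT)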
Add IP routes for specific subnets
Add routes to the container networks on the other hosts (example for server clx-mld-41):
sudo ip route add 172.16.42.0/24 via 12.12.12.42
sudo ip route add 172.16.43.0/24 via 12.12.12.43
sudo ip route add 172.16.44.0/24 via 12.12.12.44
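These routes do not survive a reboot. One way to make them persistent on Ubuntu 16.04 is to attach them to the ib0 stanza in /etc/network/interfaces (a sketch for server clx-mld-41; adjust the subnets per host):

# appended under "iface ib0 inet static" in /etc/network/interfaces
post-up ip route add 172.16.42.0/24 via 12.12.12.42
post-up ip route add 172.16.43.0/24 via 12.12.12.43
post-up ip route add 172.16.44.0/24 via 12.12.12.44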
A quick check
Give your environment a quick test run to make sure you’re all set up:
docker run hello-world
Create or pull a base image and run a container
Docker can build images automatically by reading the instructions from a Dockerfile. A Dockerfile is a text document that contains all the commands a user could call on the command line to assemble an image.
Dockerfile
- Create an empty directory.
- Change directories (cd) into the new directory, create a file called Dockerfile, copy-and-paste the following content into that file, and save it.
Take note of the comments that explain each statement in your new Dockerfile.
FROM ubuntu:16.04

# Set MOFED version, OS version and platform
ENV MOFED_VER 4.5-1.0.1.0
ENV OS_VER ubuntu16.04
ENV PLATFORM x86_64

RUN apt-get update
RUN apt-get -y install apt-utils
RUN apt-get install -y --allow-downgrades --allow-change-held-packages --no-install-recommends \
    build-essential cmake tcsh tcl tk \
    make git curl vim wget ca-certificates \
    iputils-ping net-tools ethtool \
    perl lsb-release python-libxml2 \
    iproute2 pciutils libnl-route-3-200 \
    kmod libnuma1 lsof openssh-server \
    swig libelf1 automake libglib2.0-0 \
    autoconf graphviz chrpath flex libnl-3-200 m4 \
    debhelper autotools-dev gfortran libltdl-dev && \
    rm -rf /var/lib/apt/lists/*

# Download and install NVIDIA OFED 4.5-1.0.1.0 for Ubuntu 16.04 (user-space libraries only)
RUN wget --quiet http://content.mellanox.com/ofed/MLNX_OFED-${MOFED_VER}/MLNX_OFED_LINUX-${MOFED_VER}-${OS_VER}-${PLATFORM}.tgz && \
    tar -xvf MLNX_OFED_LINUX-${MOFED_VER}-${OS_VER}-${PLATFORM}.tgz && \
    MLNX_OFED_LINUX-${MOFED_VER}-${OS_VER}-${PLATFORM}/mlnxofedinstall --user-space-only --without-fw-update -q && \
    rm -rf MLNX_OFED_LINUX-${MOFED_VER}-${OS_VER}-${PLATFORM} && \
    rm -rf *.tgz

# Allow OpenSSH to talk to containers without asking for confirmation
RUN cat /etc/ssh/ssh_config | grep -v StrictHostKeyChecking > /etc/ssh/ssh_config.new && \
    echo "    StrictHostKeyChecking no" >> /etc/ssh/ssh_config.new && \
    mv /etc/ssh/ssh_config.new /etc/ssh/ssh_config
Build the Docker image and run a container
Now run the build command. This creates a Docker image, which we’re going to tag using -t so it has a friendly name.
docker build -t myofed451image .
Where is your built image? It’s in your machine’s local Docker image registry:
docker images
Run a Docker container in privileged or non-privileged mode from the image you just built:
docker run -it --privileged --name=mlnx-verbs-prvlg myofed451image bash
OR
docker run -it --cap-add=IPC_LOCK --device=/dev/infiniband/uverbs1 --name=my-verbs-nonprvlg myofed451image bash
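The uverbs device to map can be listed on the host first; in our setup /dev/infiniband/uverbs1 corresponds to the second IB device (mlx5_1). A quick sketch to confirm on your host (the exact device names depend on your system):

ls /dev/infiniband/   # uverbs0, uverbs1, rdma_cm, ...
ibv_devices           # lists mlx5_0, mlx5_1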
Benchmark
Check the NVIDIA OFED version and uverbs:
ofed_info -s
MLNX_OFED_LINUX-4.5-1.0.1.0
ls /dev/infiniband/uverbs1
Run a bandwidth stress test over IB in the container:
| Role | Command |
|---|---|
| Server | ib_write_bw -a -d mlx5_1 & |
| Client | ib_write_bw -a -F $server_IP -d mlx5_1 --report_gbits |
In this way, you can run a bandwidth stress test over IB between containers.
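As a hedged illustration of how this can look end to end, the test can be launched from the hosts with docker exec (the container name mlnx-verbs-prvlg and the server container IP 172.16.41.2 are assumptions based on this guide's naming and addressing; substitute your own):

# On host clx-mld-41: start the server side inside its container
docker exec -it mlnx-verbs-prvlg ib_write_bw -a -d mlx5_1 &
# On host clx-mld-42: run the client inside its container against the server container's IP
docker exec -it mlnx-verbs-prvlg ib_write_bw -a -F 172.16.41.2 -d mlx5_1 --report_gbits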
Done!
Related Documents