Created on Jun 30, 2019
In this document we will demonstrate a deployment procedure for RDMA-accelerated applications running in Docker containers over an NVIDIA end-to-end 100 Gb/s InfiniBand (IB) solution.
We use Ubuntu 16.04.6 LTS as the host OS and install the latest Docker CE. We show how to update and install the NVIDIA software and hardware components on the host and in the Docker container.
For network communication, each Docker container has two devices:
- A Linux bridge device for IP connectivity; the bridge is connected to the host IPoIB interface
- A manually mapped InfiniBand uverbs device for RDMA traffic
This guide covers Docker deployment without K8s orchestration. For K8s deployment, please proceed to this link: https://docs.mellanox.com/label/SOL/k8s
Server Logical Design
Docker Network Diagram
In our reference setup we wire the first port to the InfiniBand switch and do not use the second port.
We will use 4 servers in this setup.
Each server is connected to the SB7700 switch by a 100 Gb/s IB copper cable. The switch port connectivity in our case is as follows:
- Ports 1-4 – connected to the host servers
Server names and network configuration are provided below.
| Server type | Server name | Internal network | External network |
|-------------|-------------|------------------|------------------|
| Server 01 | clx-mld-41 | ib0: 184.108.40.206 | eno1: from DHCP (reserved) |
| Server 02 | clx-mld-42 | ib0: 220.127.116.11 | eno1: from DHCP (reserved) |
| Server 03 | clx-mld-43 | ib0: 18.104.22.168 | eno1: from DHCP (reserved) |
| Server 04 | clx-mld-44 | ib0: 22.214.171.124 | eno1: from DHCP (reserved) |
Update Ubuntu Software Packages
To update/upgrade Ubuntu software packages, run the commands below.
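For example:

```shell
# Refresh the package index and upgrade the installed packages
sudo apt-get update
sudo apt-get upgrade -y
```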
Enable the Subnet Manager (SM) on the IB Switch
Refer to the MLNX-OS User Manual (available at enterprise-support.nvidia.com/s/) to become familiar with the switch software.
Before starting to use the NVIDIA switch, we recommend that you upgrade it to the latest MLNX-OS version.
There are three options to select the best place to locate the SM:
- Enabling the SM on one of the managed switches. This is a convenient and quick option that makes InfiniBand essentially 'plug & play'.
- Running /etc/init.d/opensmd on one or more servers. It is recommended to run the SM on a server when the fabric has 648 nodes or more.
- Use Unified Fabric Management (UFM®) Appliance dedicated server. UFM offers much more than the SM.
UFM needs more compute power than the existing switches have, but does not require an expensive server. It does, however, represent the additional cost of a dedicated server.
We will explain options 1 and 2 only.
Option 1: Configuring the SM on a Switch (MLNX-OS®, all NVIDIA switch systems)
To enable the SM on one of the managed switches, follow these steps:
Log in to the switch and enter config mode.
Enable the SM.
Check that the SM is running.
Save the configuration permanently.
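A typical MLNX-OS session for these steps looks like the following (the prompt name is illustrative; verify the exact command syntax against the MLNX-OS User Manual for your release):

```
switch > enable
switch # configure terminal
switch (config) # ib sm
switch (config) # show ib sm
switch (config) # configuration write
```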
Option 2: Configuring the SM on a Server (skip this procedure if you enabled the SM on the switch)
To start OpenSM on a server, simply run opensm from the command line on your management node.
To start OpenSM automatically on the head node, edit the /etc/opensm/opensm.conf file.
Create a configuration file template.
Edit the /etc/opensm/opensm.conf file with the required options.
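Assuming a standard MLNX_OFED installation, the steps above can be sketched as follows:

```shell
# Start OpenSM manually with its default settings
opensm

# Create a default configuration file template
opensm -c /etc/opensm/opensm.conf

# Start OpenSM using the edited configuration file
opensm -F /etc/opensm/opensm.conf
```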
Upon initial installation, OpenSM is configured and running with a default routing algorithm. When running a multi-tier fat-tree cluster, it is recommended to change the routing options to create the most efficient routing algorithm and deliver the highest performance.
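For example, the fat-tree routing engine can be selected in /etc/opensm/opensm.conf (a sketch; verify the option name against the OpenSM version you are running):

```
routing_engine ftree
```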
For full details on other configurable attributes of OpenSM, see the “OpenSM – Subnet Manager” chapter of the NVIDIA OFED for Linux User Manual.
Installing NVIDIA OFED for Ubuntu on a Host
This chapter describes how to install and test the NVIDIA OFED for Linux package on a single host machine with an NVIDIA ConnectX®-5 adapter card installed. For more information, see the NVIDIA OFED for Linux User Manual.
Downloading NVIDIA OFED
Verify that the system has an NVIDIA network adapter (HCA/NIC) installed.
The following example shows a system with an installed NVIDIA HCA:
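One way to check for the adapter (output will vary by system):

```shell
# List NVIDIA/Mellanox PCI devices; a ConnectX-5 HCA should appear here
lspci | grep -i mellanox
```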
- Download the ISO image matching your OS to your host.
The image's name has the format MLNX_OFED_LINUX-<ver>-<OS label>-<CPU arch>.iso. You can download it from:
https://www.nvidia.com/en-us/networking/ > Products > Software > InfiniBand/VPI Drivers > NVIDIA MLNX_OFED > Download.
Use the MD5SUM utility to confirm the downloaded file’s integrity. Run the following command and compare the result to the value provided on the download page.
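For example (the file name is a placeholder):

```shell
# Compare the output with the MD5 value shown on the download page
md5sum MLNX_OFED_LINUX-<ver>-<OS label>-<CPU arch>.iso
```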
Installing NVIDIA OFED
MLNX_OFED is installed by running the mlnxofedinstall script. The installation script performs the following:
- Discovers the currently installed kernel
- Uninstalls any software stacks that are part of the standard operating system distribution or another vendor's commercial stack
- Installs the MLNX_OFED_LINUX binary RPMs (if they are available for the current kernel)
- Identifies the currently installed InfiniBand and Ethernet network adapters and automatically upgrades the firmware
The installation script removes all previously installed NVIDIA OFED packages and re-installs from scratch. You will be prompted to acknowledge the deletion of the old packages.
- Log into the installation machine as root.
Copy the downloaded image to /tmp.
Run the installation script.
Reboot after the installation has finished successfully.
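A typical installation sequence, assuming the image was copied to /tmp (the file name is a placeholder):

```shell
# Mount the ISO image read-only
sudo mount -o ro,loop /tmp/MLNX_OFED_LINUX-<ver>-<OS label>-<CPU arch>.iso /mnt

# Run the installation script
sudo /mnt/mlnxofedinstall

# Reboot once the installation completes successfully
sudo reboot
```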
By default, both ConnectX®-5 VPI ports are initialized as InfiniBand ports.
Check that the ports' mode is InfiniBand.
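One way to check is with ibstat (output varies by system):

```shell
# Look for "Link layer: InfiniBand" in the output for each port
ibstat
```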
If the link layer shown is Ethernet, you need to change the interface port type to InfiniBand.
ConnectX®-5 ports can be individually configured to work as InfiniBand or Ethernet ports.
Change the mode to InfiniBand. Use the mlxconfig script after the driver is loaded.
* LINK_TYPE_P1=1 selects InfiniBand mode for port 1
a. Start mst and list the device names
b. Change the mode of both ports to InfiniBand:
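A sketch of the mst/mlxconfig sequence (the device path under /dev/mst/ is illustrative and depends on your adapter):

```shell
# a. Start mst and list the device names
sudo mst start
sudo mst status

# b. Set both ports to InfiniBand mode (LINK_TYPE=1)
sudo mlxconfig -d /dev/mst/mt4121_pciconf0 set LINK_TYPE_P1=1 LINK_TYPE_P2=1

# Reboot (or reload the driver) for the new port type to take effect
sudo reboot
```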
c. Query the InfiniBand devices and print the information about them that is available from user space.
Run the ibdev2netdev utility to see all the associations between the Ethernet devices and the IB devices/ports.
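For example:

```shell
# Show the mapping between net devices and IB devices/ports
ibdev2netdev
```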
Configure the ib0 interface in the /etc/network/interfaces file.
Check that the network configuration is set correctly.
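A minimal ib0 stanza for /etc/network/interfaces, using Server 01's address from the table above (the netmask is an assumption; adjust to your addressing plan):

```
auto ib0
iface ib0 inet static
    address 184.108.40.206
    netmask 255.255.255.0
```

After bringing the interface up (for example with ifup ib0), you can verify it with ip addr show ib0.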
Installing and Configuring Docker
Uninstall old versions
To uninstall old versions, we recommend running the following command:
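Following the Docker documentation for this Ubuntu release, for example:

```shell
# Remove older Docker packages; it is fine if none are installed
sudo apt-get remove docker docker-engine docker.io
```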
It’s OK if apt-get reports that none of these packages are installed.
The contents of /var/lib/docker/, including images, containers, volumes, and networks, are preserved.
Install Docker CE
For Ubuntu 16.04 and higher, the Linux kernel includes support for OverlayFS, and Docker CE will use the overlay2 storage driver by default.
Install using the repository
Before you install Docker CE for the first time on a new host machine, you need to set up the Docker repository. Afterward, you can install and update Docker from the repository.
Set Up the repository
Update the apt package index:
Install packages to allow apt to use a repository over HTTPS:
Add Docker’s official GPG key:
Verify that the key fingerprint is 9DC8 5822 9FC7 DD38 854A E2D8 8D81 803C 0EBF CD88.
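The repository setup steps above, as they appear in the Docker CE documentation for Ubuntu 16.04 (verify against the current Docker docs):

```shell
# Update the apt package index
sudo apt-get update

# Install packages to allow apt to use a repository over HTTPS
sudo apt-get install -y \
    apt-transport-https ca-certificates curl software-properties-common

# Add Docker's official GPG key
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -

# Verify the key fingerprint (should end in 0EBF CD88)
sudo apt-key fingerprint 0EBFCD88

# Set up the stable repository
sudo add-apt-repository \
    "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
    $(lsb_release -cs) stable"
```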
Install Docker CE
Install the latest version of Docker CE, or go to the next step to install a specific version. Any existing installation of Docker is replaced.
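For the latest version:

```shell
# Install the latest available Docker CE version
sudo apt-get update
sudo apt-get install -y docker-ce
```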
Customize the docker0 bridge
The recommended way to configure the Docker daemon is to use the daemon.json file, which is located in /etc/docker/ on Linux. If the file does not exist, create it. You can specify one or more of the following settings to configure the default bridge network. The same options are available as flags to dockerd, with an explanation for each:
- --bip=CIDR: supply a specific IP address and netmask for the docker0 bridge, using standard CIDR notation. For example: 172.16.41.1/16.
- --fixed-cidr=CIDR: restrict the IP range of the docker0 subnet, using standard CIDR notation.
- --mtu=BYTES: override the maximum packet length on docker0. For example: 1500.
- --dns=: the DNS servers to use. For example: --dns=126.96.36.199,188.8.131.52.
Restart Docker after making changes to the daemon.json file.
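A daemon.json matching the options above might look like this (the values are illustrative; adjust them to your addressing plan):

```json
{
  "bip": "172.16.41.1/16",
  "mtu": 1500
}
```

After editing the file, restart the daemon with sudo systemctl restart docker.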
Set Up Communication with the Outside World
Check that IP forwarding is enabled in the kernel. If it is disabled, enable it and check again.
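For example:

```shell
# 1 means IP forwarding is enabled
sysctl net.ipv4.ip_forward

# Enable it if needed, then check again
sudo sysctl -w net.ipv4.ip_forward=1
```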
For security reasons, Docker configures the iptables rules to prevent containers from forwarding traffic from outside the host machine, on Linux hosts. Docker sets the default policy of the FORWARD chain to DROP.
To override this default behavior, you can manually change the default policy:
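For example:

```shell
# Allow forwarded traffic (relaxes Docker's default DROP policy)
sudo iptables -P FORWARD ACCEPT
```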
Add an IP route with a specific subnet
Add routing for the container networks on the other hosts:
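A hedged example: on each host, add a route for the other hosts' container subnets via their ib0 addresses (both the subnet and the gateway below are placeholders):

```shell
# Route the container subnet of a remote host through its ib0 address
sudo ip route add 172.16.42.0/24 via <remote host ib0 IP>
```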
A quick check
Give your environment a quick test run to make sure you’re all set up:
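For example:

```shell
# The hello-world image prints a confirmation message if Docker works
sudo docker run hello-world
```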
Create or Pull a Base Image and Run a Container
Docker can build images automatically by reading the instructions from a Dockerfile.
A Dockerfile is a text document that contains all the commands a user could call on the command line to assemble an image.
- Create an empty directory.
- Change directories (cd) into the new directory, create a file called Dockerfile, add the build instructions to it, and save it.
Take note of the comments that explain each statement in your new Dockerfile.
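The original Dockerfile contents are not reproduced in this document; a minimal sketch for an RDMA test image might look like the following (the package set is an assumption, not the document's original file):

```dockerfile
# Base image matches the host OS used in this guide
FROM ubuntu:16.04

# Install user-space InfiniBand utilities and the perftest benchmarks
# (illustrative package set)
RUN apt-get update && \
    apt-get install -y --no-install-recommends \
        libibverbs1 ibverbs-utils infiniband-diags perftest && \
    rm -rf /var/lib/apt/lists/*

CMD ["/bin/bash"]
```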
Build a Docker Image and Run a Container
Now run the build command. This creates a Docker image, which we’re going to tag using -t so it has a friendly name.
Where is your built image? It’s in your machine’s local Docker image registry:
Run a Docker container in privileged or non-privileged mode from the remote repository.
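A sketch of the build-and-run sequence (the image name and the uverbs device number are illustrative):

```shell
# Build the image and give it a friendly tag with -t
sudo docker build -t rdma-test .

# The built image appears in the local image registry
sudo docker image ls

# Non-privileged: map only the uverbs device and allow memory pinning
sudo docker run -it --device=/dev/infiniband/uverbs1 --cap-add=IPC_LOCK rdma-test

# Privileged alternative: full device access
sudo docker run -it --privileged rdma-test
```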
Check the NVIDIA OFED version and uverbs:
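Inside the container, for example:

```shell
# Print the installed OFED version
ofed_info -s

# The mapped uverbs device should be visible inside the container
ls /dev/infiniband
```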
Run a bandwidth stress test over IB in the container.
On the server-side container:
ib_write_bw -a -d mlx5_1 &
On the client-side container (replace $server_IP with the server container's IB address):
ib_write_bw -a -F $server_IP -d mlx5_1 --report_gbits
In this way you can run bandwidth stress tests over IB between containers.