Configuring the Appliance for the First Time
The diagram below describes the connectivity scheme of the UFM High-Availability cluster.
The following are instructions on how to configure the management and fabric (InfiniBand) interfaces in the UFM cluster.
The NVIDIA UFM Enterprise Appliance has multiple Ethernet management interfaces. The primary management interface is enp99s0f0. The MAC address for enp99s0f0 is available on the pull tab and can be configured in the DHCP server. To use the remote management controller with DHCP, free-range IP allocation must be enabled on the DHCP server.
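As a minimal sketch, a static DHCP reservation for enp99s0f0 could look like the following ISC dhcpd host entry; the host name, MAC address, and IP address are placeholders and must be replaced with the values for your appliance.
host ufm-appliance {
    # MAC address printed on the appliance pull tab (placeholder)
    hardware ethernet 00:11:22:33:44:55;
    # Static IP address to assign to enp99s0f0 (placeholder)
    fixed-address 10.10.10.10;
}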
The appliance supports a direct connection via a serial port.
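For example, from a Linux workstation the serial console can be opened with a standard terminal emulator; the device path and baud rate below are assumptions and should be adjusted to match your setup.
# Assumed serial device and baud rate; adjust as needed
screen /dev/ttyUSB0 115200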
For instructions on how to configure the management interface, please refer to Configuring the Appliance.
This interface should be used as the primary interface when configuring HA.
When operating in HA configuration, directly connect (back-to-back - without a management switch in the middle) the Master node to the Standby node. To do so, utilize the Ethernet management interface enp99s0f1, as shown in the above diagram.
For your convenience, you may use the interface CLI command to set a static IP address for enp99s0f1.
Example:
interface enp99s0f1 ip address 11.0.0.11/24
Configuring the fabric interface is optional.
The UFM XDR Enterprise Appliance has multiple InfiniBand interfaces. The primary interface is ib0.
Configure a static IPoIB address using the Network service: create the file /etc/network/interfaces.d/ifcfg-ib0 and run ifup ib0.
Example of the ifcfg-ib0 file definition:
auto ib0
iface ib0 inet static
    address 10.0.0.12
    netmask 255.255.255.0
    broadcast 10.0.0.255
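After creating the file, bring the interface up and verify the address with standard Linux tools, for example:
# Bring up ib0 using the definition above, then confirm the assigned IPoIB address
ifup ib0
ip addr show ib0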
For your convenience, you may use the interface CLI command to set a static IP address for ib0.
Example:
interface ib0 ip address 192.168.1.11/24
For more details on how to configure the UFM Enterprise, please refer to UFM Enterprise Initial Configuration.
The FNM (Fabric Network Management) port is a separate OSFP InfiniBand in-band management port. It provides access to the UFM Enterprise Appliance, allowing data center operators to efficiently monitor and operate the entire fabric.
Since the FNM port is semi-populated (only its first four lanes are wired), it functions as an HDR port.
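As a quick check, the negotiated link rate of the FNM port can be inspected with the standard ibstat utility once the link is up; the HCA name below is a placeholder. An HDR link reports a rate of 200 Gb/s.
# Show link state and rate of the FNM port (HCA name is a placeholder)
ibstat mlx5_0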
XDR Cluster
Switch | Cable / Transceiver SKU on Switch Side | Cable / Transceiver SKU on UFM XDR-HCA Side* |
Quantum-3 Q3200 Switch Network Port | MMS4X00-NS / 980-9I30H-00NM00 (100m) | MMS4X00-NS400 / 980-9I31N-00NM00 (100m) |
Quantum-X800 Q3400 Switch Network Port | MMS4X00-NS / 980-9I30H-00NM00 (100m) | MMS4X00-NS400 / 980-9I31N-00NM00 (100m) |
Quantum-X800 Q3400 Switch FNM as XDR Lite (as 4 1x 100G) | MMS4X00-NS / 980-9I30H-00NM00 (100m) | MMS4X00-NS400 / 980-9I31N-00NM00 (100m) |
* On the UFM XDR Appliance side, configure the ConnectX-8 C8180 cards as non-planarized.
** The 980-9I30H-00NM00 transceiver supports a 2x100 Gb/s configuration.
NDR Cluster
To support this connectivity, change the default configuration of all the ConnectX®-8 XDR adapter cards by running:
mlxconfig -d <device> set NUM_OF_PLANES_P1=1
sudo reboot
By default, the ConnectX®-8 XDR adapters are configured for 4 planes, as they are typically connected to an XDR multi-plane switch. Since the NDR switch is single-plane, this parameter must be set to 1 to ensure proper operation with the NDR switch.
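To confirm the change after the reboot, the parameter can be queried back with mlxconfig, for example:
# Verify that the adapter now reports a single plane
mlxconfig -d <device> query | grep NUM_OF_PLANES_P1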
Switch | Cable / Transceiver SKU on Switch Side | Cable / Transceiver SKU on UFM XDR-HCA Side* |
Quantum-2 MQM97xx Network Port | MMA4Z00-NS / 980-9I510-00NS00 (50m) | MMA4Z00-NS400 / 980-9I51S-00NS00 (50m) |
Quantum-2 MQM97xx Network Port | MMS4X00-NS / 980-9I30H-00NM00 (100m) | MMS4X00-NS400 / 980-9I31N-00NM00 (100m) |
* On the UFM XDR Appliance side, configure the ConnectX-8 C8180 cards as non-planarized.
** The 980-9I30H-00NM00 / 980-9I510-00NS00 transceivers support a 2x100 Gb/s configuration.