Supported Interfaces
This section describes the interfaces supported by the DPU. Each numbered interface referenced in the figures is described in the following table, with a link to detailed information.
The figures below are for illustration purposes only and might not reflect the current revision of the BF2500 card.
Component Side
Print Side
Item | Interface | Description
1 | DPU | DPU IC, 8 cores
2 | Ethernet QSFP56 Interface | Ethernet traffic is transmitted through the DPU QSFP56 connectors. The QSFP56 connectors allow the use of modules, optical and passive cable interconnect solutions. By default, the ports of this group of OPNs are set to operate in QSFP28 mode (default card firmware setting).
3 | PCI Express Interface | PCIe Gen 4.0 through an x16 edge connector
4 | DDR4 SDRAM On-Board Memory | Units of SDRAM for a total of 16GB @ 3200MT/s, single DDR4 channel, 64bit + 8bit ECC, solder-down memory
5 | NC-SI Management Interface | BMC connectivity for remote management
6 | Mini USB Type B Interface | Used for OS image loading
7 | 1GbE OOB Management Interface | 1GbE BASE-T OOB management interface
8 | External PCIe Power Supply Connector | External 12V power connection through a 6-pin ATX connector
9 | Networking Ports LEDs Interface | One bi-color I/O LED per port to indicate link and physical status
10 | RTC Battery | CR621 battery holder for RTC
11 | eMMC Interface | x8 NAND flash
DPU
NVIDIA® BlueField®-2 DPU is a family of advanced DPU IC solutions that integrate a coherent mesh of 64-bit Arm v8 A72 cores, an NVIDIA® ConnectX®-6 Dx network adapter front-end, and a PCI Express switch into a single chip. The powerful DPU IC architecture includes an Arm v8 multicore processor array that enables customers to develop sophisticated applications and highly differentiated feature sets. BlueField-2 leverages the rich Arm software ecosystem and introduces the ability to offload the x86 software stack.
At the heart of BlueField-2, the ConnectX-6 Dx network offload controller with RDMA and RDMA over Converged Ethernet (RoCE) technology delivers cutting-edge performance for networking and storage applications such as NVMe over Fabrics. Advanced features include an embedded virtual switch with programmable access control lists (ACLs), transport offloads, and stateless encapsulation/decapsulation of NVGRE, VXLAN, and MPLS overlay protocols.
Encryption
Applies to Crypto-enabled OPNs.
The DPU addresses the concerns of modern data centers by combining hardware encryption accelerators with embedded software and fully integrated advanced network capabilities, making it an ideal platform for developing proprietary security applications. It enables a distributed security architecture by isolating and protecting each individual workload and providing flexible control and visibility at the server and workload level, controlling risk at the server access layer. The DPU builds security into the DNA of the data center and enables prevention, detection, and response to potential threats in real time. It is capable of delivering powerful functionality, including encryption of data-in-motion, bare-metal provisioning, a stateful L4 firewall, and more.
Ethernet QSFP56 Interface
The network ports of the DPU are compliant with the IEEE 802.3 Ethernet standards listed in Features and Benefits. Ethernet traffic is transmitted through the cards' QSFP56 connectors. Note that the ports operate in QSFP28 mode by default.
PCI Express Interface
The DPU supports PCI Express Gen 4.0 (3.0, 2.0, and 1.1 compatible) through an x16 edge connector. The PCIe interface provides the following features; a link-verification sketch follows the list:
PCIe Gen 4.0 compliant, 3.0, 2.0 and 1.1 compatible
2.5, 5.0, 8.0, or 16.0 GT/s link rate x16
Auto-negotiates to x16, x8, x4, x2, or x1
Support for MSI/MSI-X mechanisms
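On a Linux host, the negotiated link rate and width can be cross-checked against these figures by reading the PCIe device's sysfs attributes. The sketch below is illustrative only and is not part of the DPU software; the PCIe address is a placeholder, and the exact attribute strings vary slightly between kernel versions.
```python
from pathlib import Path

# Placeholder PCIe address (BDF) of the DPU; locate the real one with, e.g., `lspci | grep -i mellanox`.
DEVICE_BDF = "0000:03:00.0"

def pcie_link_status(bdf: str) -> dict:
    """Read negotiated and maximum PCIe link speed/width from Linux sysfs."""
    dev = Path("/sys/bus/pci/devices") / bdf

    def read(attr: str) -> str:
        return (dev / attr).read_text().strip()

    return {
        "current_speed": read("current_link_speed"),  # "16.0 GT/s PCIe" corresponds to Gen 4.0
        "current_width": read("current_link_width"),  # "16" for a full x16 link
        "max_speed": read("max_link_speed"),
        "max_width": read("max_link_width"),
    }

if __name__ == "__main__":
    print(pcie_link_status(DEVICE_BDF))
```
If the reported width is lower than x16, verify that the card is seated in a slot that is both physically and electrically x16.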
DDR4 SDRAM On-Board Memory
The DPU incorporates 16GB @ 3200MT/s single DDR4 channel, 64bit + 8bit ECC, solder-down memory.
NC-SI Management Interface
The DPU enables the connection of a Baseboard Management Controller (BMC) to a set of Network Interface Controllers (NICs) for the purpose of enabling out-of-band remote manageability. NC-SI management is supported over RMII through a dedicated connector on the DPU. Please refer to NC-SI Management Interface for the connector pins.
UART Interface Connectivity
A UART debug interface is available on the DPU cards via 3 pins of a 30-pin NC-SI connector (described in NC-SI Management Interface). The connectivity is shown in the following table:
NC-SI Connector Pin # | Signal on DPU
30 | BF_UART0_RX
28 | BF_UART0_TX
26 | GND
The UART interface is compliant with the TTL 3.3V voltage level. A USB-to-UART cable that supports TTL voltage levels should be used to connect to the UART interface for Arm console access; see the example below.
It is prohibited to directly connect any RS-232 cable! Only TTL 3.3V voltage level cables are supported.
The USB to UART cable is not used for NC-SI management purposes.
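Any serial terminal (minicom, screen, PuTTY, and so on) can be used over such a cable. The following is a minimal Python sketch using the pyserial package; the device node and the 115200 baud setting are typical assumptions for USB-to-UART adapters, not values taken from this document, and should be adjusted to your setup.
```python
import serial  # pyserial: install with `pip install pyserial`

# Assumed values: the device node depends on the USB-to-UART adapter used,
# and the baud rate shown here is a common console default, not a documented requirement.
PORT = "/dev/ttyUSB0"
BAUD = 115200

with serial.Serial(PORT, BAUD, timeout=1) as console:
    console.write(b"\r\n")             # nudge the Arm console to print a prompt
    reply = console.read(256)          # read whatever the console sends back
    print(reply.decode(errors="replace"))
```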
USB Interfaces
The DPU uses a Mini USB Type B connector to load operating system images.
1GbE OOB Management Interface
The DPU incorporates a 1GbE RJ45 out-of-band management port that allows the network operator to establish trust boundaries when accessing the management function and to apply it to network resources. It can also be used to ensure management connectivity (including the ability to determine the status of any network component) independent of the status of other in-band network components.
10Mb/s and 100Mb/s modes are not supported on this interface.
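On the system where the OOB port is visible as a network interface, its link state and negotiated speed can be checked from sysfs. The sketch below is illustrative; the interface name is an assumption and will differ depending on the operating system and drivers in use.
```python
from pathlib import Path

# Hypothetical interface name; substitute the name under which the OOB port enumerates on your system.
IFACE = "oob_net0"

def oob_link_info(iface: str) -> dict:
    """Report operational state and negotiated speed of a network interface via Linux sysfs."""
    net = Path("/sys/class/net") / iface
    state = (net / "operstate").read_text().strip()       # "up" or "down"
    try:
        speed = (net / "speed").read_text().strip()        # expected "1000"; 10/100 modes are not supported
    except OSError:
        speed = "unknown (link down)"
    return {"operstate": state, "speed_mbps": speed}

if __name__ == "__main__":
    print(oob_link_info(IFACE))
```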
1GbE OOB Management LEDs Interface
There are two OOB management LEDs, one green and one amber/yellow. The following table describes the LED behavior for DPUs with or without an on-board BMC.
Green LED | Amber/Yellow LED | Link Activity
OFF | OFF | Link off
ON | OFF | Link on / No activity
Blinking | OFF | 1 Gb/s link / Activity (RX, TX)
Other combinations | | Not supported
RTC Battery
The DPU incorporates a coin-type lithium battery, CR621, for the RTC (Real Time Clock).
eMMC Interface
The DPU incorporates an eMMC interface on the card's print side. The eMMC is an x8 NAND flash and is used for Arm boot, operating system storage and disk space. Memory size is 64GB.
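From the Arm side, the eMMC typically enumerates as an MMC block device, and its reported capacity can be cross-checked with a few lines of Python. The sketch below is a minimal example under that assumption; the block device name is hypothetical and should be adjusted to how the eMMC appears on your system.
```python
from pathlib import Path

# Assumed block device for the on-board eMMC (as commonly seen on the Arm side).
EMMC = Path("/sys/block/mmcblk0")

sectors = int((EMMC / "size").read_text())     # the sysfs "size" attribute is given in 512-byte sectors
size_gb = sectors * 512 / 10**9
name = (EMMC / "device" / "name").read_text().strip()
print(f"eMMC device name: {name}")
print(f"reported capacity: {size_gb:.1f} GB")  # expected to be on the order of 64 GB
```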
External PCIe Power Supply Connector
Applies to FHHL P-Series DPUs with x16 PCIe Gen 4 lanes only, which require supplementary power to be fed via the DPU's on-board 6-pin ATX power supply connector. The power cable that should be connected to this on-board ATX connector is not supplied with the DPU; however, this is a standard cable that is normally available in servers.
The FHHL P-Series DPUs with x16 PCIe Gen 4 lanes incorporate an external 12V power connection through a 6-pin ATX connector. The DPU includes special circuitry that provides current balancing between the two power supplies: the 12V from the PCIe x16 standard slot and the 12V from the ATX 6-pin connector. Since the power provided by the PCIe golden fingers is limited to 75W, a total maximum of up to 150W is enabled through both the ATX 6-pin connector and the PCIe x16 golden fingers. The actual power consumption is in accordance with the DPU's mode of operation and is split evenly between the two power sources.
For the pinout of the on-board 6-pin ATX connector, please refer to External PCIe Power Supply Connector Pins.
Networking Ports LEDs Interface
There is one bi-color (Yellow and Green) I/O LED per port to indicate speed and link status.
Link Indications
State | Bi-Color LED (Yellow/Green)
Beacon command for locating the adapter card | 1Hz blinking Yellow
Error | 4Hz blinking Yellow indicates an error with the link
Physical Activity | Blinking Green indicates activity on the link
Link Up | A constant Green indicates a link with the maximum networking speed