Supported Interfaces

This section describes the DPU-supported interfaces. Each numbered interface referenced in the figures is described in the following tables, with a link to detailed information.

Warning

The figures below are for illustration purposes only and might not reflect the current revision of the DPU.

HHHL Single-Slot DPU

Model: B3140H

OPNs: 900-9D3D4-00EN-HA0, 900-9D3D4-00NN-HA0

(Figures: DPU component side and DPU print side views)

| Item | Interface | Description |
|------|-----------|-------------|
| 1 | DPU SoC | DPU IC with 8 or 16 Arm cores |
| 2 | Networking Interface | Network traffic is transmitted through the DPU QSFP112 connectors, which accept modules as well as optical and passive cable interconnect solutions |
| 3 | Networking Ports LEDs Interface | One bi-color I/O LED per port to indicate link and physical status |
| 4 | PCI Express Interface | PCIe Gen 5.0 through an x16 edge connector |
| 5 | DDR5 SDRAM On-Board Memory | Single-channel cards: 10 units of DDR5 SDRAM for a total of 16GB @ 5200MT/s, 64bit + 8bit ECC, solder-down memory |
| 6 | NC-SI Management Interface | 20-pin NC-SI connector for BMC connectivity for remote management |
| 7 | USB 4-pin RA Connector | Used for OS image loading |
| 8 | 1GbE OOB Management Interface | 1GbE BASE-T out-of-band (OOB) management interface |
| 9 | Integrated BMC | DPU BMC |
| 10 | SSD Interface | 128GB SSD |
| 11 | RTC Battery | Battery holder for the RTC |
| 12 | eMMC | x8 NAND flash |

FHHL Single-Slot Dual-Port DPUs with PCIe Extension Option

Models: B3220 and B3210

OPNs: 900-9D3B6-00CV-AA0, 900-9D3B6-00SV-AA0, 900-9D3B6-00CC-EA0, 900-9D3B6-00SC-EA0, 900-9D3B6-00CC-AA0, 900-9D3B6-00SC-AA0

(Figures: DPU component side and DPU print side views)

FHHL Single-Slot Single-Port DPUs

Model: B3140L

OPNs: 900-9D3B4-00EN-EA0, 900-9D3B4-00PN-EA0

(Figures: DPU component side and DPU print side views)

FHHL Dual-Slot Dual-Port DPUs

Model: B3240

OPNs: 900-9D3B6-00CN-AB0, 900-9D3B6-00SN-AB0

(Figures: DPU component side and DPU print side views)

| Item | Interface | Description |
|------|-----------|-------------|
| 1 | DPU System-on-Chip | BlueField-3 P-Series: 16 Arm cores, 560MHz/2133MHz. BlueField-3 E-Series: 8 Arm cores, 505MHz/2000MHz |
| 2 | Networking Interface | Network traffic is transmitted through the DPU QSFP112 connectors, which accept modules as well as optical and passive cable interconnect solutions |
| 3 | Networking Ports LEDs Interface | One bi-color I/O LED per port to indicate link and physical status |
| 4 | PCI Express Interface | PCIe Gen 5.0 through an x16 edge connector |
| 5 | DDR5 SDRAM On-Board Memory | Single-channel cards: 10 units of DDR5 SDRAM for a total of 16GB @ 5600MT/s, 64bit + 8bit ECC, solder-down memory. Dual-channel cards: 20 units of DDR5 SDRAM for a total of 32GB @ 5600MT/s, 128bit + 16bit ECC, solder-down memory |
| 6 | NC-SI Management Interface | 20-pin NC-SI connector for BMC connectivity for remote management |
| 7 | USB 4-pin RA Connector | Used for OS image loading |
| 8 | 1GbE OOB Management Interface | 1GbE BASE-T OOB management interface |
| 9 | MMCX RA PPS IN/OUT | Allows PPS IN/OUT |
| 10 | External PCIe Power Supply Connector | An external 12V power connection through an 8-pin ATX connector |
| 11 | Cabline CA-II Plus Connectors | Two Cabline CA-II Plus connectors are populated to allow connectivity to an additional PCIe x16 auxiliary card. Applies to OPNs 900-9D3B6-00CV-AA0, 900-9D3B6-00SV-AA0, 900-9D3B6-00CC-AA0, 900-9D3B6-00SC-AA0, 900-9D3B6-00CN-AB0, and 900-9D3B6-00SN-AB0 |
| 12 | Integrated BMC | DPU BMC |
| 13 | SSD Interface | 128GB SSD |
| 14 | RTC Battery | Battery holder for the RTC |
| 15 | eMMC | x8 NAND flash |

DPU System-on-Chip (SoC)

NVIDIA® BlueField®-3 DPU is a family of advanced DPU IC solutions that integrate a coherent mesh of 64-bit Armv8.2+ A78 Hercules cores, an NVIDIA® ConnectX®-7 network adapter front-end, and a PCI Express switch into a single chip. The powerful DPU IC architecture includes an Arm multicore processor array, enabling customers to develop sophisticated applications and highly differentiated feature sets. It leverages the rich Arm software ecosystem and introduces the ability to offload the x86 software stack.

At the heart of BlueField-3, the ConnectX-7 network offload controller with RDMA and RDMA over Converged Ethernet (RoCE) technology delivers cutting-edge performance for networking and storage applications such as NVMe over Fabrics. Advanced features include an embedded virtual switch with programmable access control lists (ACLs), transport offloads, and stateless encapsulation/decapsulation of NVGRE, VXLAN, and MPLS overlay protocols.

Encryption

Warning

Applies to Crypto enabled OPNs.

The DPU addresses the concerns of modern data centers by combining hardware encryption accelerators with embedded software and fully integrated advanced network capabilities, making it an ideal platform for developing proprietary security applications. It enables a distributed security architecture by isolating and protecting each workload and providing flexible control and visibility at the server and workload level. Controlling risk at the server access layer builds security into the DNA of the data center and enables prevention, detection, and response to potential threats in real time. The DPU can deliver powerful functionality, including encryption of data-in-motion, bare-metal provisioning, a stateful L4 firewall, and more.

Networking Interface

Warning

The DPU includes special circuits to protect the card/server from ESD shocks when plugging copper cables.

The network ports are compliant with the InfiniBand Architecture Specification, Release 1.5. InfiniBand traffic is transmitted through the cards' QSFP112 connectors.

Networking Ports LEDs Interface

(Figure: networking ports LEDs)

One bicolor (Yellow and Green) I/O LED per port indicates speed and link status.

Link Indications

| State | Bi-Color LED (Yellow/Green) |
|-------|-----------------------------|
| Beacon command for locating the adapter card | 1Hz blinking Yellow |
| Error | 4Hz blinking Yellow (see the error types below) |
| Physical Activity | Blinking Green |
| Link Up | Solid Green |
| Physical Up (InfiniBand Mode Only) | Solid Yellow |

The link error can be one of the following:

| Error Type | Description | LED Behavior |
|------------|-------------|--------------|
| I2C | I2C access to the networking ports fails | Blinks until the error is fixed |
| Over-current | Over-current condition of the networking ports | Blinks until the error is fixed |
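For host software that reports these indications, the table can be captured in a small lookup structure. The sketch below is illustrative only; the color/pattern keys and state strings are our own encoding, not an NVIDIA-defined API:

```python
# Illustrative decoder for the bi-color port LED indications described above.
# The (color, pattern) keys and state strings are our own encoding, not an
# NVIDIA-defined API.
LED_STATES = {
    ("yellow", "blink_1hz"): "Beacon: locating the adapter card",
    ("yellow", "blink_4hz"): "Error: I2C failure or over-current on the port",
    ("green", "blink"): "Physical activity",
    ("green", "solid"): "Link up",
    ("yellow", "solid"): "Physical up (InfiniBand mode only)",
}

def decode_led(color: str, pattern: str) -> str:
    """Translate an observed LED color/pattern into its documented meaning."""
    return LED_STATES.get((color, pattern), "Unknown indication")

print(decode_led("green", "solid"))  # -> Link up
```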


PCI Express Interface

The DPU supports PCI Express Gen 5.0/4.0 through x16 edge connectors. Some cards allow connectivity to an additional PCIe x16 Auxiliary card through the Cabline CA-II Plus connectors.

The following lists the PCIe interface features; a sketch for checking the negotiated link from Linux follows the list:

  • PCIe Gen 5.0, 4.0, 3.0, 2.0, and 1.1 compatible

  • 2.5, 5.0, 8.0, 16.0, or 32.0 GT/s link rate, x16 lanes

  • Auto-negotiates to x16, x8, x4, x2, or x1
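On Linux, the link the DPU actually negotiated can be verified through sysfs. This is a minimal sketch; the PCI address 0000:03:00.0 is a placeholder to replace with the DPU's real address (e.g., from lspci):

```python
# Minimal sketch: read the PCIe link state negotiated by the DPU from sysfs.
# The PCI address "0000:03:00.0" is a placeholder -- find the real address
# with `lspci` on your system.
from pathlib import Path

dev = Path("/sys/bus/pci/devices/0000:03:00.0")

for attr in ("current_link_speed", "current_link_width",
             "max_link_speed", "max_link_width"):
    print(f"{attr}: {(dev / attr).read_text().strip()}")
```

A fully trained Gen 5.0 x16 link reports a speed of 32.0 GT/s and a width of 16.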

DDR5 SDRAM On-Board Memory

The DPUs incorporate 10 or 20 units of DDR5 SDRAM. See the following table for DDR5 SDRAM memory specifications per ordering part number.

| Model | OPNs | DDR5 SDRAM On-Board Memory |
|-------|------|----------------------------|
| B3140L, B3140H | 900-9D3B4-00EN-EA0 / 900-9D3B4-00PN-EA0, 900-9D3D4-00EN-HA0 / 900-9D3D4-00NN-HA0 | Single-channel with 10 DDR5 units + ECC (64bit + 8bit ECC) for a total of 16GB @ 5200MT/s |
| B3220, B3210, B3240 | 900-9D3B6-00CV-AA0 / 900-9D3B6-00SV-AA0, 900-9D3B6-00CC-AA0 / 900-9D3B6-00SC-AA0, 900-9D3B6-00CN-AB0 / 900-9D3B6-00SN-AB0 | Dual-channel with 20 DDR5 units + ECC (128bit + 16bit ECC) for a total of 32GB @ 5600MT/s |


NC-SI Management Interface

The DPU enables the connection of a Baseboard Management Controller (BMC) to a set of Network Interface Controllers (NICs) for out-of-band remote manageability. NC-SI management is supported over RMII and has a dedicated connector on the DPU. Refer to NC-SI Management Interface for the connector pins.

UART Interface Connectivity

A UART debug interface is available on DPU cards via a 20-pin NC-SI connector. For DPUs without onboard BMC hardware, the UART interface is that of the BlueField-3 device. For DPUs with onboard BMC hardware, the UART interface is that of the NIC BMC device. The connectivity for both cases is shown in the following table:

| NC-SI Connector Pin # | Signal on DPU without BMC | Signal on DPU with BMC |
|-----------------------|---------------------------|------------------------|
| 14 | BF_UART0_RX | BMC_RX5 |
| 16 | BF_UART0_TX | BMC_TX5 |
| 12 | GND | GND |

The UART interface is compliant with the TTL 3.3V voltage level. Use a USB-to-UART cable that supports TTL voltage levels to connect the UART Interface for Arm console access.

Important

It is prohibited to connect any RS-232 cable directly! Only TTL 3.3V voltage level cables are supported.

Warning

Do not use the USB-to-UART cable for NC-SI management purposes.
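Once a TTL 3.3V USB-to-UART cable is connected, any serial terminal can open the Arm console. The following minimal sketch uses pyserial; the /dev/ttyUSB0 device path and the 115200 baud rate are assumptions to adjust for your setup:

```python
# Minimal sketch: open the DPU Arm console over a TTL 3.3V USB-to-UART cable.
# Assumptions: the cable enumerates as /dev/ttyUSB0 and the console runs at
# 115200 8N1 -- adjust both for your setup. Requires `pip install pyserial`.
import serial

console = serial.Serial(port="/dev/ttyUSB0", baudrate=115200, timeout=1)
console.write(b"\r\n")  # nudge the console into printing a login prompt
print(console.read(256).decode(errors="replace"))
console.close()
```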


USB 4-pin RA Connector

The USB 4-pin RA connector is used to load operating system images. Use a 4-pin male to Type-A male cable to connect to the board.

(Figure: USB 4-pin to Type-A cable)

Important

Do not connect the cable male-to-male to a host; it is intended only for use with a disk-on-key.

Warning

The 4-pin male to Type-A male cable is not included in the shipped DPU card box and should be ordered separately as part of the accessories kit (P/N: MBF35-DKIT).


1GbE OOB Management Interface

The DPU incorporates a 1GbE RJ45 out-of-band (OOB) management port that allows the network operator to establish trust boundaries in accessing the management function and to apply it to network resources. It can also be used to ensure management connectivity (including the ability to determine the status of any network component) independently of the status of other in-band network components.

Warning

For DPUs with integrated BMC: 1GbE OOB Management can be performed via the integrated BMC.
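As an illustration, the OOB port's link state can be polled from the Arm side over sysfs. The interface name oob_net0 is an assumption (it is the customary name on BlueField software images); verify it with ip link:

```python
# Minimal sketch: check the 1GbE OOB management port state from the DPU's Arm
# side. The interface name "oob_net0" is an assumption -- verify it with
# `ip link` on your system.
from pathlib import Path

iface = Path("/sys/class/net/oob_net0")
state = (iface / "operstate").read_text().strip()
print(f"OOB port state: {state}")        # e.g. "up" or "down"
if state == "up":
    speed = (iface / "speed").read_text().strip()
    print(f"Link speed: {speed} Mb/s")   # expect 1000 for the 1GbE port
```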

1GbE OOB Management LEDs Interface

Two OOB management LEDs, one Green and one Yellow, behave as described in the table below.

| Green LED | Yellow LED | Link/Activity |
|-----------|------------|---------------|
| OFF | OFF | Link off |
| ON | OFF | 1 Gb/s link / No activity |
| Blinking | OFF | 1 Gb/s link / Activity (RX, TX) |
| OFF | ON | Not supported |
| OFF | Blinking | Not supported |
| ON | ON | Not supported |
| Blinking | Blinking | Not supported |

PPS IN/OUT Interface

The DPU incorporates an integrated hardware clock (PHC) that allows the DPU to achieve sub-20μs accuracy and offers many timing-related functions, such as time-triggered scheduling or time-based SDN acceleration (time-based ASAP²). Furthermore, 5T technology enables the software application to transmit fronthaul (ORAN) traffic at high bandwidth. The PTP implementation supports the subordinate clock, master clock, and boundary clock roles.

The DPU PTP solution allows you to run any PTP stack on your host.

For testing and measurement, selected NVIDIA DPUs allow you to use the PPS-out signal from the onboard MMCX RA connector. The DPU also allows measuring PTP at scale using the PPS-in signal: the PTP hardware clock on the network adapter is sampled on each PPS-in event, and the timestamp is sent to the software.
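As a minimal sketch of sampling the PHC from software on Linux, the PTP clock character device can be read with clock_gettime(). The /dev/ptp0 path is an assumption; find the DPU's PHC index with `ethtool -T <interface>`:

```python
# Minimal sketch: sample the PTP hardware clock (PHC) on Linux. The device
# path /dev/ptp0 is an assumption -- find the DPU's PHC index with
# `ethtool -T <interface>`. Uses the standard dynamic-clock mapping
# FD_TO_CLOCKID(fd) = (~fd << 3) | 3.
import os
import time

fd = os.open("/dev/ptp0", os.O_RDONLY)
clockid = (~fd << 3) | 3             # dynamic POSIX clock IDs are negative
ns = time.clock_gettime_ns(clockid)  # nanoseconds since the PHC epoch
print(f"PHC time: {ns} ns")
os.close(fd)
```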

External PCIe Power Supply Connector

Warning

Applies to the following DPUs only. The external ATX power cable is not supplied with the DPU package; however, this is a standard cable that is usually available in servers.

B3220 DPUs: 900-9D3B6-00CV-AA0 and 900-9D3B6-00SV-AA0.

B3240 DPUs: 900-9D3B6-00CN-AB0 and 900-9D3B6-00SN-AB0.

B3210 DPUs: 900-9D3B6-00CC-AA0 and 900-9D3B6-00SC-AA0.

The FHHL P-Series DPUs incorporate an external 12V power connection through an ATX 8-pin PCI connector (Molex 455860005). The DPU includes special circuitry that provides current balancing between the two power supplies: the 12V from the PCIe x16 standard slot and the 12V from the ATX 8-pin connector. Since the power provided by the PCIe golden fingers is limited to 75W, a total maximum of up to 150W is enabled through the ATX 8-pin connector and the PCIe x16 golden fingers together (the ATX 8-pin connector draws its power from the server and can supply up to 150W, per ATX specifications).

The maximum power consumption, which does not exceed 150W, depends on the DPU's mode of operation and is split between the two power sources as follows (a worked example follows the list):

  • Up to 66W from the PCIe golden fingers (12V)

  • The rest of the consumed power is drawn from the external PCIe power supply connector
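As a worked example of this split (the function is ours, for illustration under the limits quoted above):

```python
# Illustrative calculation of the power split described above: up to 66W is
# drawn from the PCIe golden fingers and the remainder from the external
# 8-pin ATX connector, within the 150W total budget.
GOLDEN_FINGERS_MAX_W = 66
TOTAL_MAX_W = 150

def power_split(total_draw_w: float) -> tuple[float, float]:
    """Return (golden_fingers_w, atx_connector_w) for a given total draw."""
    if not 0 <= total_draw_w <= TOTAL_MAX_W:
        raise ValueError(f"total draw must be within 0..{TOTAL_MAX_W}W")
    fingers_w = min(total_draw_w, GOLDEN_FINGERS_MAX_W)
    return fingers_w, total_draw_w - fingers_w

print(power_split(150.0))  # -> (66, 84.0): 66W from the slot, 84W from ATX
```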

Important Notes and Warnings

  • The BlueField-3 DPU accommodates a standard PCIe Aux power connection. However, certain servers may necessitate a custom setup to enable ATX power compatibility.

  • Prior to connecting the Aux power, it is advisable to use a multi-meter to measure the power supply.

  • Do not link the CPU power cable to the BlueField-3 DPU's PCIe Aux power, as their pin configurations differ. Utilizing the CPU power cable in this manner is strictly prohibited and can potentially damage the BlueField-3 DPU. Please refer to External PCIe Power Supply Connector Pins for the external PCIe power supply pins.

  • It is preferable that the x16 PCIe golden fingers and the PCI ATX connector power supply both draw from the same power source. For more information on how to power-up the card, refer to DPU Power-Up Instructions.

  • If you are uncertain about your server's compatibility with the PCI ATX connection, please reach out to your NVIDIA representative for assistance.

Cabline CA-II Plus Connectors

Warning

Applies to the following OPNs:

B3220 DPUs: 900-9D3B6-00CV-AA0 and 900-9D3B6-00SV-AA0.
B3240 DPUs: 900-9D3B6-00CN-AB0 and 900-9D3B6-00SN-AB0.
B3210 DPUs: 900-9D3B6-00CC-AA0 and 900-9D3B6-00SC-AA0.

The Cabline CA-II Plus connectors on the DPU enable connectivity to an additional PCIe x16 bus, beyond the PCIe x16 bus available through the golden fingers. The Cabline CA-II Plus connectors allow connectivity to flash cards and NVMe SSD drives.

Specific applications benefit from direct connectivity at the far end of the Cabline CA-II cables, through the two 60-pin Cabline CA-II connectors, directly to the motherboard, in order to reduce the insertion loss and/or the additional space associated with a PCIe x16 flash auxiliary board.

The Cabline CA-II connectors mate with two 60-pin Cabline CA-II cables that can be distinguished by their black or white external insulators and connector pinouts. The black Cabline CA-II cable mates with the DPU's component (top) side, whereas the white Cabline CA-II cable mates with the DPU's print (bottom) side. The Cabline CA-II cables are offered in three standard lengths: 150mm, 350mm, and 550mm.

For connector pinouts, please refer to Cabline CA-II Plus Connectors Pinouts.

Integrated BMC Interface

Warning

The BMC Interface applies to DPUs with integrated BMC only.

The DPU incorporates an onboard integrated NIC BMC and an Ethernet switch. The BMC becomes available once the host server powers up the card. The NIC BMC can control the DPU's power and enables DPU shutdown and power-up.

NVMe SSD Interface

Warning

The Self Encrypting Disk (SED) capability is not supported.

The on-board 128GB client-grade NVMe SSD is used for non-persistent storage of user applications and logs. Note that all SSD devices have a limit on the total number of write operations they can handle throughout their lifespan. This limit is influenced significantly by the software use case and by specific parameters such as block size and the data access pattern (sequential or random).

It is the customer's responsibility to monitor the rate at which the SSD ages, both during code validation and in field use, and to ensure that it aligns with the intended use case.
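As one way to monitor SSD aging, the NVMe health log exposes a percentage-used wear estimate. The sketch below shells out to smartctl (from smartmontools) and assumes the SSD appears as /dev/nvme0; both the device path and the JSON field names should be verified against your smartctl version:

```python
# Minimal sketch: read the NVMe "percentage used" wear estimate to track SSD
# aging, as recommended above. Assumes smartmontools is installed and the
# on-board SSD appears as /dev/nvme0; verify the device path and the JSON
# field names for your smartctl version.
import json
import subprocess

out = subprocess.run(
    ["smartctl", "--json", "--all", "/dev/nvme0"],
    capture_output=True, text=True, check=True,
).stdout
health = json.loads(out).get("nvme_smart_health_information_log", {})
print(f"SSD wear (percentage used): {health.get('percentage_used', 'n/a')}%")
```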

RTC Battery

The DPU incorporates a coin-type lithium CR621 battery in a battery holder for the real-time clock (RTC).

eMMC Interface

The eMMC is an x8 NAND flash device used for Arm boot and operating system storage. The raw memory size is 128GB; configured as pSLC, the effective capacity is 40GB.
