The NVIDIA® BlueField®-2 data processing unit (DPU) is a data center infrastructure on a chip optimized for traditional enterprise, high-performance computing (HPC), and modern cloud workloads, delivering a broad set of accelerated software-defined networking, storage, security, and management services. BlueField-2 DPU enables organizations to transform their IT infrastructures into state-of-the-art data centers that are accelerated, fully programmable, and armed with “zero trust” security to prevent data breaches and cyber-attacks.
By combining the industry-leading NVIDIA ConnectX®-6 Dx network adapter with an array of Arm® cores, BlueField-2 offers purpose-built, hardware-acceleration engines with full software programmability. Sitting at the edge of every server, BlueField-2 is optimized to handle critical infrastructure tasks quickly, increasing data center efficiency. BlueField-2 empowers agile and high-performance solutions for cloud networking, storage, cybersecurity, data analytics, HPC, and artificial intelligence (AI), from edge to core data centers and clouds, all while reducing the total cost of ownership. The NVIDIA DOCA software development kit (SDK) enables developers to rapidly create applications and services for the BlueField-2 DPU. The DOCA SDK makes it easy and straightforward to leverage DPU hardware accelerators and CPU programmability for better application performance and security.
System Requirements
Item | Description |
---|---|
Main-board PCI Express slot | x8 or x16 PCIe Gen 4.0 slot, according to the PCIe interface specified for the card. |
System Power Supply | Minimum 75W or greater system power supply for all cards. P-Series DPU controllers with PCIe Gen 4.0 x16 require an additional 75W through a supplementary 6-pin ATX power supply connector. |
Operating System | The BlueField-2 DPU is shipped with Ubuntu, a commercial Linux operating system, which includes the NVIDIA OFED stack (MLNX_OFED) and is capable of running all customer-based Linux applications seamlessly. The BlueField-2 DPU also supports CentOS and has an out-of-band 1GbE management interface. For more information, please refer to the DOCA SDK documentation or the NVIDIA BlueField-2 Software User Manual. |
Connectivity | |
For detailed information, see Specifications.
Package Contents
Prior to unpacking your DPU, it is important to make sure you meet all the system requirements listed above for a smooth installation. Be sure to inspect each piece of equipment shipped in the packing box. If anything is missing or damaged, contact your reseller.
Card Package
Item | Description |
---|---|
Cards | 1x BlueField-2 DPU |
Accessories | 1x tall bracket (shipped assembled on the card). For MBF2M345A-HECOT/MBF2M345A-HESOT only: 1x short bracket is included in the box (not assembled); for other cards, short brackets should be ordered separately. For MBS2M512C-AECOT and MBS2M512C-AESOT only: 1x screw (hexalobular low-profile head, D4.8 M3x2) for installing an SSD memory device (the SSD device itself is not included in the package). |
Accessories Kit
The accessories kit should be ordered separately. Please refer to the table below and order the kit that corresponds to the desired DPU.
DPU OPN | Accessories Kit OPN | Contents |
---|---|---|
MBF2H516A-CEEOT, MBF2H516A-CENOT, MBF2M516A-CECOT, MBF2M516A-CEEOT, MBF2M516A-CENOT | MBF20-DKIT | 1x USB 2.0 Type A to Mini USB Type B cable; 1x USB 2.0 Type A to 30-pin flat socket |
All other DPUs | MBF25-DKIT | 1x 4-pin USB to female USB Type A cable; 1x 30-pin shrouded connector cable |
For FHHL 100Gb/s P-Series DPUs, you need a 6-pin PCIe external power cable to activate the card. The cable is not included in the package. For further details, please refer to External PCIe Power Supply Connector.
Features and Benefits
This section describes hardware features and capabilities.
Please refer to the relevant driver and/or firmware release notes for feature availability.
Feature | Description |
---|---|
PCI Express (PCIe) | Uses PCIe Gen 4.0 (16 GT/s) through an x8/x16 edge connector. Backward compatible with PCIe Gen 1.1, 2.0, and 3.0. |
Up to 200 Gigabit Ethernet | |
On-board Memory | |
BlueField-2 DPU | The BlueField-2 DPU integrates eight 64-bit Armv8 A72 cores interconnected by a coherent mesh network, one DRAM controller, an RDMA intelligent network adapter supporting up to 200Gb/s, an embedded PCIe switch with endpoint and root complex functionality, and up to 16 lanes of PCIe Gen 4.0. |
Overlay Networks | In order to better scale their networks, data center operators often create overlay networks that carry traffic from individual virtual machines over logical tunnels in encapsulated formats such as NVGRE and VXLAN. While this solves network scalability issues, it hides the TCP packet from the hardware offloading engines, placing higher loads on the host CPU. The DPU effectively addresses this by providing advanced NVGRE and VXLAN hardware offloading engines that encapsulate and decapsulate the overlay protocols (see the VXLAN header sketch after this table). |
RDMA and RDMA over Converged Ethernet (RoCE) | The DPU, utilizing IBTA RDMA (Remote Direct Memory Access) and RoCE (RDMA over Converged Ethernet) technology, delivers low latency and high performance over Ethernet networks. Leveraging data center bridging (DCB) capabilities as well as advanced congestion control hardware mechanisms, RoCE provides efficient low-latency RDMA services over Layer 2 and Layer 3 networks (see the device enumeration sketch after this table). |
NVIDIA PeerDirect | NVIDIA PeerDirect communication provides high-efficiency RDMA access by eliminating unnecessary internal data copies between components on the PCIe bus (for example, from GPU to CPU), and therefore significantly reduces application run time. DPU advanced acceleration technology enables higher cluster efficiency and scalability to tens of thousands of nodes. |
Quality of Service (QoS) | Support for port-based Quality of Service enabling various application requirements for latency and SLA. |
Storage Acceleration | A consolidated compute and storage network achieves significant cost-performance advantages over multi-fabric networks. Standard block and file access protocols can leverage RDMA for high-performance storage access. The BlueField-2 DPU may operate as a co-processor offloading specific storage tasks from the host, isolating part of the storage media from the host, or enabling abstraction of software-defined storage logic. |
NVMe-oF | Nonvolatile Memory Express (NVMe) over Fabrics is a protocol for communicating block storage IO requests over RDMA to transfer data between a host computer and a target solid-state storage device or system over a network. BlueField-2 DPU may operate as a co-processor offloading specific storage tasks from the host using its powerful NVMe over Fabrics Offload accelerator. |
SR-IOV | DPU SR-IOV technology provides dedicated adapter resources and guaranteed isolation and protection for virtual machines (VM) within the server. |
GPU Direct | The latest advancement in GPU-GPU communications is GPUDirect RDMA. This technology provides a direct P2P (peer-to-peer) data path between GPU memory and the HCA device. It significantly decreases GPU-GPU communication latency and completely offloads the CPU, removing it from all GPU-GPU communications across the network. The DPU uses high-speed DMA transfers to copy data between P2P devices, resulting in more efficient system applications (see the GPU memory registration sketch after this table). |
Crypto | The crypto-enabled versions of the BlueField-2 DPU include a BlueField-2 IC that supports accelerated cryptographic operations. In addition to specialized instructions for bulk cryptographic processing in the Arm cores, dedicated hardware offload engines accelerate public-key cryptography and random number generation. |
Security Accelerators | A consolidated compute and network solution based on the DPU achieves significant advantages over a centralized security server solution. Standard encryption protocols and security applications can leverage BlueField-2 compute capabilities and network offloads for security application solutions such as a Layer 4 stateful firewall. |
Out-of-Band Management | The BlueField-2 DPU incorporates a 1GbE RJ45 out-of-band port that allows the network operator to establish trust boundaries in accessing the management function to apply it to network resources. It can also be used to ensure management connectivity (including the ability to determine the status of any network component) independent of the status of other in-band network components. |
BMC | Some DPUs incorporate local NIC BMC (Baseboard Management Controller) hardware on the board. The BMC SoC (system on a chip) can utilize either shared or dedicated NICs for remote access. The BMC node enables remote power cycling, board environment monitoring, BlueField-2 chip temperature monitoring, board power and consumption monitoring, and individual interface resets. The BMC also supports the ability to push a bootstream to BlueField-2. |
Direct-Attached Storage | Some DPUs enable the installation of an SSD device on the card (not provided as part of the package) via interfaces such as M.2. This local storage device can be presented to the data center network as an NVMe-oF device, sharing the volume of all deployed cards between all nodes in the domain. |
PPS IN/OUT | NVIDIA offers a full IEEE 1588v2 PTP software solution, as well as time-sensitive-related features called “5T”. The NVIDIA PTP and 5T software solutions are designed to meet the most demanding PTP profiles. BlueField-2 incorporates an integrated PTP hardware clock (PHC) that allows BlueField-2 to achieve sub-20µs accuracy and offers many timing-related functions such as time-triggered scheduling or time-based SDN acceleration. The PTP implementation supports subordinate clock, master clock, and boundary clock roles. The BlueField-2 PTP solution allows you to run any PTP stack on the host and on the DPU Arm cores (see the PTP hardware clock sketch after this table). The MMCX connector on the board allows customers to connect an RF cable without occupying space on the external bracket. Enabled in MBF2M516C-CESOT, MBF2M516C-CECOT, MBF2H512C-AECOT, MBF2H512C-AESOT, MBF2H532C-AECOT, and MBF2H532C-AESOT. |
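
To make the overlay-network offload more concrete, the following is a minimal sketch of the VXLAN header (RFC 7348) that the NVGRE/VXLAN engines add and strip in hardware. The struct layout follows the RFC; the helper function name is illustrative only.

```c
#include <stdint.h>

/* VXLAN header (RFC 7348): 8 bytes carried over UDP (destination port
 * 4789) and prepended to the inner Ethernet frame. The DPU offload
 * engines add and strip this header in hardware, so the host CPU never
 * touches the encapsulation. */
struct vxlan_hdr {
    uint8_t flags;        /* bit 3 (0x08) set => the VNI field is valid */
    uint8_t reserved1[3];
    uint8_t vni[3];       /* 24-bit VXLAN Network Identifier */
    uint8_t reserved2;
};

/* Illustrative helper: extract the 24-bit VNI from a received header. */
static inline uint32_t vxlan_get_vni(const struct vxlan_hdr *h)
{
    return ((uint32_t)h->vni[0] << 16) |
           ((uint32_t)h->vni[1] << 8)  |
            (uint32_t)h->vni[2];
}
```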
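As a starting point for the RDMA/RoCE feature, the sketch below enumerates the RDMA-capable devices exposed by the DPU (or by the host-side ConnectX function) using the standard libibverbs API. It assumes MLNX_OFED or rdma-core is installed and is built with `-libverbs`.

```c
#include <stdio.h>
#include <infiniband/verbs.h>

int main(void)
{
    int num = 0;
    struct ibv_device **devs = ibv_get_device_list(&num);
    if (!devs) {
        perror("ibv_get_device_list");
        return 1;
    }

    /* Print a few capability limits for every RDMA device found. */
    for (int i = 0; i < num; i++) {
        struct ibv_context *ctx = ibv_open_device(devs[i]);
        struct ibv_device_attr attr;

        if (ctx && ibv_query_device(ctx, &attr) == 0)
            printf("%-16s max_qp=%d max_cq=%d max_mr=%d\n",
                   ibv_get_device_name(devs[i]),
                   attr.max_qp, attr.max_cq, attr.max_mr);
        if (ctx)
            ibv_close_device(ctx);
    }

    ibv_free_device_list(devs);
    return 0;
}
```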
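For the GPUDirect RDMA row, the fragment below shows the usual GPU memory registration pattern: a CUDA device buffer is registered directly with the RDMA NIC so transfers bypass host memory. It is a sketch only; it assumes an existing libibverbs protection domain (`pd`), that the nvidia-peermem kernel module is loaded, and the function name is illustrative.

```c
#include <stddef.h>
#include <cuda_runtime.h>
#include <infiniband/verbs.h>

/* Register `len` bytes of GPU memory for RDMA. Returns the memory
 * region handle, or NULL on failure. */
struct ibv_mr *register_gpu_buffer(struct ibv_pd *pd, size_t len)
{
    void *gpu_buf = NULL;

    /* Allocate the buffer in GPU device memory. */
    if (cudaMalloc(&gpu_buf, len) != cudaSuccess)
        return NULL;

    /* With GPUDirect RDMA the device pointer is registered directly
     * with the NIC; no staging copy through host memory is needed. */
    struct ibv_mr *mr = ibv_reg_mr(pd, gpu_buf, len,
                                   IBV_ACCESS_LOCAL_WRITE |
                                   IBV_ACCESS_REMOTE_READ |
                                   IBV_ACCESS_REMOTE_WRITE);
    if (!mr)
        cudaFree(gpu_buf);
    return mr;
}
```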
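Finally, for the PPS IN/OUT row, the sketch below reads a PTP hardware clock (PHC) through the standard Linux dynamic POSIX clock interface. The `/dev/ptp0` path is an assumption; the PHC index backing a given port can be found with `ethtool -T <interface>`.

```c
#include <fcntl.h>
#include <stdio.h>
#include <time.h>
#include <unistd.h>

/* Convert a /dev/ptpN file descriptor into a dynamic POSIX clock ID
 * (same convention as the kernel's testptp.c and linuxptp). */
#define CLOCKFD 3
#define FD_TO_CLOCKID(fd) ((~(clockid_t)(fd) << 3) | CLOCKFD)

int main(void)
{
    /* Assumed device node; adjust to the PHC backing your port. */
    int fd = open("/dev/ptp0", O_RDONLY);
    if (fd < 0) {
        perror("open /dev/ptp0");
        return 1;
    }

    struct timespec ts;
    if (clock_gettime(FD_TO_CLOCKID(fd), &ts) == 0)
        printf("PHC time: %lld.%09ld s\n",
               (long long)ts.tv_sec, ts.tv_nsec);

    close(fd);
    return 0;
}
```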