Introduction
The NVIDIA® BlueField®-2 data processing unit (DPU) is a data center infrastructure on a chip optimized for traditional enterprise, high-performance computing (HPC), and modern cloud workloads, delivering a broad set of accelerated software-defined networking, storage, security, and management services. The BlueField-2 DPU enables organizations to transform their IT infrastructures into state-of-the-art data centers that are accelerated, fully programmable, and armed with “zero trust” security to prevent data breaches and cyber-attacks.
By combining the industry-leading NVIDIA ConnectX®-6 Dx network adapter with an array of Arm® cores, BlueField-2 offers purpose-built hardware acceleration engines with full software programmability. Sitting at the edge of every server, BlueField-2 is optimized to handle critical infrastructure tasks quickly, increasing data center efficiency. BlueField-2 empowers agile and high-performance solutions for cloud networking, storage, cybersecurity, data analytics, HPC, and artificial intelligence (AI), from edge to core data centers and clouds, all while reducing the total cost of ownership. The NVIDIA DOCA software development kit (SDK) enables developers to rapidly create applications and services for the BlueField-2 DPU. The DOCA SDK makes it easy and straightforward to leverage DPU hardware accelerators and CPU programmability for better application performance and security.
The BlueField-2 DPU is designed and validated for operation in data-center servers and other large environments that guarantee proper power supply and airflow conditions.
The DPU is not intended for installation on a desktop or a workstation. Moreover, installing the DPU in any system without proper power and airflow levels can impact the DPU's functionality and potentially damage it. Failure to meet the environmental requirements listed in this user manual may void the warranty.
Item | Description
Main-board PCI Express slot | x16 PCIe Gen 4.0 slot
System Power Supply | Minimum 75W system power supply for all cards. P-Series DPU controllers with PCIe Gen 4.0 x16 require an additional 75W through a supplementary 6-pin ATX power supply connector.
Operating System | The BlueField-2 DPU is shipped with Ubuntu – a commercial Linux operating system – which includes the NVIDIA OFED stack (MLNX_OFED) and is capable of running all customer-based Linux applications seamlessly. The BlueField-2 DPU also supports CentOS and has an out-of-band 1GbE management interface. For more information, refer to the DOCA SDK documentation or the NVIDIA BlueField-2 Software User Manual.
Connectivity |
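As a quick sanity check against the PCIe slot and software requirements above, the following Python sketch reads the negotiated link speed and width of NVIDIA/Mellanox PCI devices and reports the installed MLNX_OFED version. It assumes a Linux host with lspci and, optionally, the ofed_info utility; the script itself is an illustrative helper, not part of the DPU software.

```python
# check_requirements.py - minimal sketch; assumes a Linux host with lspci and,
# optionally, MLNX_OFED's ofed_info utility installed.
import subprocess

def mellanox_pci_devices():
    """List PCI addresses of NVIDIA/Mellanox devices (vendor ID 0x15b3)."""
    out = subprocess.run(["lspci", "-D", "-d", "15b3:"],
                         capture_output=True, text=True, check=True).stdout
    return [line.split()[0] for line in out.splitlines() if line]

def link_status(bdf):
    """Read the negotiated PCIe link speed/width from sysfs for one device."""
    base = f"/sys/bus/pci/devices/{bdf}"
    with open(f"{base}/current_link_speed") as f:
        speed = f.read().strip()
    with open(f"{base}/current_link_width") as f:
        width = f.read().strip()
    return speed, width

if __name__ == "__main__":
    for bdf in mellanox_pci_devices():
        speed, width = link_status(bdf)
        # A Gen 4.0 x16 slot should report a 16 GT/s link and width 16.
        print(f"{bdf}: link speed {speed}, width x{width}")
    try:
        ofed = subprocess.run(["ofed_info", "-s"], capture_output=True,
                              text=True, check=True).stdout.strip()
        print("MLNX_OFED:", ofed)
    except FileNotFoundError:
        print("ofed_info not found - MLNX_OFED may not be installed")
```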
Before unpacking your DPU, it is important to ensure you meet all the system requirements listed above for a smooth installation. Be sure to inspect each piece of equipment shipped in the packing box. If anything is missing or damaged, contact your reseller.
Card Package
Item | Description
Card | 1x BlueField-2 DPU
Accessories | 1x tall bracket (shipped assembled on the card). For MBF2M345A-HECOT/MBF2M345A-HESOT only: 1x short bracket is included in the box (not assembled). For other cards, short brackets should be ordered separately.
Accessories Kit
The accessories kit should be ordered separately. Refer to the table below and order the kit that matches the desired DPU.
DPU OPN | Accessories Kit OPN | Contents
MBF2H516A-EEEOT | MBF20-DKIT | 1x USB 2.0 Type-A to Mini USB Type-B cable; 1x USB 2.0 Type-A to 30-pin flat socket
All other DPUs | MBF25-DKIT | 4-pin USB to female USB Type-A cable; 30-pin shrouded connector cable
For FHHL 100Gb/s P-Series DPUs, you need a 6-pin PCIe external power cable to activate the card. The cable is not included in the package. For further details, please refer to External PCIe Power Supply Connector.
For detailed information, see Specifications.
This section describes hardware features and capabilities. Please refer to the relevant driver and/or firmware release notes for feature availability.
PCI Express (PCIe)
Uses PCIe Gen 4.0 (16 GT/s) through an x16 edge connector. Compatible with Gen 1.1, 2.0, and 3.0.
InfiniBand Architecture Specification v1.4 compliant
BlueField-2 DPU delivers low latency, high bandwidth, and computing efficiency for high-performance computing (HPC), artificial intelligence (AI), and hyperscale cloud data center applications. For the supported InfiniBand network protocols and rates, see Specifications.
Up to 200 Gigabit Ethernet
BlueField-2 DPU complies with the applicable IEEE 802.3 Ethernet standards; see Specifications for the supported standards and rates.
On-board Memory
Refer to Specifications for the on-board memory configuration.
BlueField-2 DPU
The NVIDIA BlueField-2 DPU integrates eight 64-bit Armv8 A72 cores interconnected by a coherent mesh network, one DRAM controller, an RDMA intelligent network adapter supporting up to 200Gb/s, an embedded PCIe switch with endpoint and root complex functionality, and up to 16 lanes of PCIe Gen 3.0/4.0.
Overlay Networks
To better scale their networks, data center operators often create overlay networks that carry traffic from individual virtual machines over logical tunnels in encapsulated formats such as NVGRE and VXLAN. While this solves network scalability issues, it hides the TCP packet from the hardware offloading engines, placing higher loads on the host CPU. The NVIDIA DPU effectively addresses this by providing advanced NVGRE and VXLAN hardware offloading engines that encapsulate and de-capsulate the overlay protocol.
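As a rough illustration of the VXLAN encapsulation described above, the sketch below creates a VXLAN overlay interface on a Linux host using iproute2. The interface name (vxlan100), uplink name (p0), VNI, and addresses are hypothetical placeholders, not values taken from this manual.

```python
# vxlan_overlay.py - minimal sketch of creating a VXLAN overlay interface with
# iproute2; names and addresses below are placeholders, not defaults.
import subprocess

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Create a VXLAN tunnel endpoint with VNI 100 over the uplink "p0",
# using the standard VXLAN UDP destination port 4789.
run(["ip", "link", "add", "vxlan100", "type", "vxlan",
     "id", "100", "dev", "p0", "remote", "192.168.1.2", "dstport", "4789"])
run(["ip", "addr", "add", "10.0.100.1/24", "dev", "vxlan100"])
run(["ip", "link", "set", "vxlan100", "up"])
```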
RDMA and RDMA over Converged Ethernet (RoCE)
The NVIDIA DPU, utilizing IBTA RDMA (Remote Direct Memory Access) and RoCE (RDMA over Converged Ethernet) technology, delivers low latency and high performance over InfiniBand and Ethernet networks. Leveraging data center bridging (DCB) capabilities as well as advanced congestion control hardware mechanisms, RoCE provides efficient low-latency RDMA services over Layer 2 and Layer 3 networks.
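To see which RDMA-capable devices and ports the DPU exposes, one option is to query the verbs layer, for example with the sketch below. It assumes the ibv_devinfo utility from rdma-core/MLNX_OFED is present on the host.

```python
# rdma_devices.py - minimal sketch; lists RDMA devices and their link layer
# (InfiniBand or Ethernet/RoCE) using the standard ibv_devinfo utility.
import subprocess

out = subprocess.run(["ibv_devinfo"], capture_output=True, text=True,
                     check=True).stdout
for line in out.splitlines():
    line = line.strip()
    # Keep only the device name and link-layer lines from the verbose output.
    if line.startswith("hca_id:") or line.startswith("link_layer:"):
        print(line)
```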
NVIDIA PeerDirect™
PeerDirect communication provides high-efficiency RDMA access by eliminating unnecessary internal data copies between components on the PCIe bus (for example, from GPU to CPU), significantly reducing application run time. NVIDIA DPU's advanced acceleration technology enables higher cluster efficiency and scalability to tens of thousands of nodes.
Quality of Service (QoS)
Support for port-based Quality of Service enabling various application requirements for latency and SLA.
Storage Acceleration
NVMe-oF
Non-Volatile Memory Express (NVMe) over Fabrics is a protocol for communicating block storage I/O requests over RDMA to transfer data between a host computer and a target solid-state storage device or system over a network. The NVIDIA BlueField-2 DPU may operate as a co-processor, offloading specific storage tasks from the host using its powerful NVMe over Fabrics offload accelerator.
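For context, attaching a host to an NVMe-oF target over RDMA is typically done with nvme-cli; the sketch below simply wraps that call. The target address, service port usage, and subsystem NQN are hypothetical placeholders.

```python
# nvmeof_connect.py - minimal sketch; connects to an NVMe-oF target over RDMA
# using nvme-cli. Target address and NQN below are placeholders.
import subprocess

target_addr = "192.168.10.20"          # IP of the NVMe-oF target (placeholder)
target_nqn = "nqn.2014-08.org.nvmexpress:example-subsystem"

subprocess.run(["nvme", "connect",
                "-t", "rdma",          # transport: RDMA (RoCE)
                "-a", target_addr,     # target address
                "-s", "4420",          # standard NVMe-oF service port
                "-n", target_nqn],     # subsystem NQN
               check=True)

# List the NVMe namespaces now visible on the host.
subprocess.run(["nvme", "list"], check=True)
```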
SR-IOV
NVIDIA DPU SR-IOV technology provides dedicated adapter resources and guaranteed isolation and protection for virtual machines (VMs) within the server.
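The sketch below shows the generic Linux sysfs path for instantiating SR-IOV virtual functions on a network device. The interface name (p0) and VF count are placeholders, and the DPU firmware/OS configuration must already permit SR-IOV.

```python
# sriov_vfs.py - minimal sketch; enables SR-IOV virtual functions through the
# standard Linux sysfs interface. Interface name and VF count are placeholders.
from pathlib import Path

IFACE = "p0"      # physical function netdev name (placeholder)
NUM_VFS = 4       # number of virtual functions to create

dev = Path(f"/sys/class/net/{IFACE}/device")
total = int((dev / "sriov_totalvfs").read_text())
if NUM_VFS > total:
    raise SystemExit(f"{IFACE} supports at most {total} VFs")

# Writing 0 first is a common precaution before changing the VF count.
(dev / "sriov_numvfs").write_text("0")
(dev / "sriov_numvfs").write_text(str(NUM_VFS))
print(f"Created {NUM_VFS} VFs on {IFACE}")
```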
High-Performance Accelerations
GPU Direct
GPUDirect RDMA is a technology that provides a direct peer-to-peer (P2P) data path between GPU memory and NVIDIA HCA devices. This significantly decreases GPU-GPU communication latency and completely offloads the CPU, removing it from all GPU-GPU communications across the network. The NVIDIA DPU uses high-speed DMA transfers to copy data between P2P devices, resulting in more efficient system applications.
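On Linux, GPUDirect RDMA typically relies on a peer-memory kernel module (nvidia-peermem, or the legacy nv_peer_mem) being loaded alongside the NVIDIA driver; the sketch below merely checks for it. The module names are the commonly used ones, not taken from this manual.

```python
# check_gpudirect.py - minimal sketch; checks whether a GPUDirect RDMA peer
# memory module (nvidia_peermem or the older nv_peer_mem) is loaded.
modules = {line.split()[0] for line in open("/proc/modules")}
peer_mods = {"nvidia_peermem", "nv_peer_mem"} & modules
if peer_mods:
    print("GPUDirect RDMA peer-memory module loaded:", ", ".join(peer_mods))
else:
    print("No peer-memory module found; GPU-HCA transfers will be staged "
          "through host memory.")
```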
Isolation
BlueField-2 DPU functions as a “computer-in-front-of-a-computer,” unlocking unlimited opportunities for custom security applications on its Arm processors, fully isolated from the host's CPU. In the event of a compromised host, BlueField-2 may detect/block malicious activities in real time and at wire speed to prevent the attack from spreading further.
Cryptography Accelerations
From IPsec and TLS data-in-motion inline encryption to AES-XTS block-level data-at-rest encryption and public key acceleration, BlueField-2 DPU hardware-based accelerations offload the crypto operations and free up the CPU, reducing latency and enabling scalable crypto solutions. In “host-unaware” solutions, the host may transmit and receive data as usual while BlueField-2 acts as a bump-in-the-wire for crypto operations.
Securing Workloads
BlueField-2 DPU accelerates connection tracking with its ASAP2 technology to enable stateful filtering on a per-connection basis. Moreover, BlueField-2 includes a Titan IC regular expression (RXP) acceleration engine, used by IDS/IPS tools for host introspection and application recognition (AR) in real time.
Security Accelerators
A consolidated compute and network solution based on the DPU achieves significant advantages over a centralized security server solution. Standard encryption protocols and security applications can leverage NVIDIA BlueField-2 compute capabilities and network offloads for security application solutions such as a Layer 4 stateful firewall.
Virtualized Cloud
By leveraging BlueField-2 DPU virtualization offloads, data center administrators can benefit from better server utilization, allowing more virtual machines and more tenants on the same hardware while reducing the TCO and power consumption.
Out-of-Band Management
The NVIDIA BlueField-2 DPU incorporates a 1GbE RJ45 out-of-band port that allows the network operator to establish trust boundaries in accessing the management function and apply it to network resources. It can also be used to ensure management connectivity (including the ability to determine the status of any network component) independently of the status of other in-band network components.
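On the DPU's Arm system, the out-of-band port typically appears as a regular Ethernet netdev (commonly named oob_net0, though the name may vary by software release); the sketch below assigns it a static management address as an example. The interface name and address are placeholders.

```python
# oob_config.py - minimal sketch; brings up the out-of-band management netdev
# on the DPU Arm system. The interface name and address are placeholders.
import subprocess

OOB_IFACE = "oob_net0"        # common name for the 1GbE OOB port (may vary)
OOB_ADDR = "10.0.0.50/24"     # placeholder management address

for cmd in (["ip", "addr", "add", OOB_ADDR, "dev", OOB_IFACE],
            ["ip", "link", "set", OOB_IFACE, "up"]):
    subprocess.run(cmd, check=True)
print(f"{OOB_IFACE} configured with {OOB_ADDR}")
```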
BMC
Some DPUs incorporate local NIC BMC (Baseboard Management Controller) hardware on the board. The BMC SoC (system on a chip) can utilize either shared or dedicated NICs for remote access. The BMC node enables remote power cycling, board environment monitoring, BlueField-2 chip temperature monitoring, board power and consumption monitoring, and individual interface resets. The BMC also supports the ability to push a boot stream to BlueField-2.
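If the board's BMC is reachable over IPMI, standard tooling such as ipmitool can exercise the remote power and monitoring functions mentioned above. The sketch below is a generic IPMI example with placeholder host and credentials, not a BlueField-specific procedure.

```python
# bmc_status.py - minimal sketch; queries chassis power status and sensors from
# a BMC over IPMI using ipmitool. Host and credentials are placeholders.
import subprocess

BMC = {"host": "10.0.0.60", "user": "admin", "password": "admin"}

def ipmi(*args):
    cmd = ["ipmitool", "-I", "lanplus",
           "-H", BMC["host"], "-U", BMC["user"], "-P", BMC["password"], *args]
    subprocess.run(cmd, check=True)

ipmi("chassis", "status")   # power state, last power event
ipmi("sensor", "list")      # temperatures, voltages, power readings
```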
PPS IN/OUT
NVIDIA offers a full IEEE 1588v2 PTP software solution, as well as time-sensitive features called “5T”. The NVIDIA PTP and 5T software solutions are designed to meet the most demanding PTP profiles. BlueField-2 incorporates an integrated hardware clock (PHC) that allows BlueField-2 to achieve sub-20 µsec accuracy and offers many timing-related functions such as time-triggered scheduling or time-based SDN accelerations. The PTP part supports the subordinate clock, master clock, and boundary clock. The BlueField-2 PTP solution allows you to run any PTP stack on your host and on the DPU Arm cores. The MMCX connectors on the board allow customers to connect an RF cable without occupying space on the external bracket. Enabled in MBF2M516C-EESOT and MBF2M516C-EECOT.
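A common way to exercise the hardware clock (PHC) described above is the linuxptp suite: ptp4l synchronizes the PHC to the PTP network, and phc2sys steers the system clock from it. The sketch below launches both against a placeholder interface name; profile-specific options and configuration files are omitted.

```python
# ptp_sync.py - minimal sketch; runs linuxptp's ptp4l and phc2sys against a
# placeholder interface. Real deployments use a profile-specific config file.
import subprocess

IFACE = "p0"   # netdev backed by the BlueField-2 hardware clock (placeholder)

# ptp4l: synchronize the PTP hardware clock (PHC) on IFACE to the PTP master.
ptp4l = subprocess.Popen(["ptp4l", "-i", IFACE, "-m"])

# phc2sys: steer the system clock from the PHC once ptp4l reports lock (-w).
phc2sys = subprocess.Popen(["phc2sys", "-s", IFACE, "-w", "-m"])

try:
    ptp4l.wait()
finally:
    phc2sys.terminate()
```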