The NVIDIA® BlueField®-3 data processing unit (DPU) is the 3rd-generation data center infrastructure-on-a-chip that enables organizations to build software-defined, hardware-accelerated IT infrastructures from cloud to core data center to edge. With 400Gb/s Ethernet or NDR 400Gb/s InfiniBand network connectivity, the BlueField-3 DPU offloads, accelerates, and isolates software-defined networking, storage, security, and management functions in ways that profoundly improve data center performance, efficiency, and security. Providing powerful computing and a broad range of programmable acceleration engines in the I/O path, BlueField-3 is perfectly positioned to address the infrastructure needs of the most demanding applications, while delivering full software backward compatibility through the NVIDIA DOCA™ software framework.
BlueField-3 DPUs transform traditional computing environments into secure and accelerated virtual private clouds, allowing organizations to run application workloads in secure, multi-tenant environments. Decoupling data center infrastructure from business applications, BlueField-3 enhances data center security, streamlines operations, and reduces the total cost of ownership. Featuring NVIDIA’s in-network computing technology, BlueField-3 enables the next generation of supercomputing platforms, delivering optimal bare-metal performance and native support for multi-node tenant isolation.
| Item | Description |
|---|---|
| PCI Express slot | In PCIe x16 configuration: PCIe Gen 5.0 (32GT/s) through the x16 edge connector. In PCIe x16 extension option: switch DSP (Data Stream Port). |
| System Power Supply | Minimum 75W or greater system power supply for all cards. P-Series DPUs with PCIe Gen 5.0 x16 require supplementary 8-pin ATX power supply connectivity, available through the external power supply connector. |
| Operating System | BlueField-3 DPU is shipped with Ubuntu Linux, which includes the NVIDIA OFED stack (MLNX_OFED) and can run customer Linux applications seamlessly. For more information, refer to the DOCA SDK documentation or the NVIDIA BlueField-3 Software User Manual. |
| Connectivity | |
For detailed information, see Specifications.
Prior to unpacking your DPU, it is important to make sure your server meets all the system requirements listed above for a smooth installation. Be sure to inspect each piece of equipment shipped in the packing box. If anything is missing or damaged, contact your reseller.
Card Package
For FHHL P-Series DPUs, you need an 8-pin PCIe external power cable to activate the card. The cable is not included in the package. For further details, please refer to External PCIe Power Supply Connector.
| Item | Description |
|---|---|
| Card | 1x BlueField-3 DPU |
| Accessories | 1x tall bracket (shipped assembled on the card) |
Accessories Kit
This is an optional accessories kit used for debugging purposes and can be ordered separately.
| Kit OPN | Contents |
|---|---|
| MBF35-DKIT | 4-pin USB to female USB Type-A cable; 20-pin shrouded connector to USB Type-A cable |
PCIe Auxiliary Card Package
This is an optional kit that applies to the following OPNs: 900-9D3B6-00CV-AA0, 900-9D3B6-00SV-AA0, 900-9D3B6-00CC-AA0, 900-9D3B6-00SC-AA0, 900-9D3B6-00CN-AB0, and 900-9D3B6-00SN-AB0.
The PCIe auxiliary kit can be purchased separately to operate selected DPUs in a dual-socket server. For package contents, refer to PCIe Auxiliary Card Kit.
This section describes hardware features and capabilities. Please refer to the relevant driver and/or firmware release notes for feature availability.
InfiniBand Architecture Specification v1.5 compliant: BlueField-3 DPU delivers low latency, high bandwidth, and computing efficiency for high-performance computing (HPC), artificial intelligence (AI), and hyperscale cloud data center applications. BlueField-3 DPU is InfiniBand Architecture Specification v1.5 compliant. For supported InfiniBand network protocols and rates, see Specifications.
Up to 400 Gigabit Ethernet: BlueField-3 DPU complies with the following IEEE 802.3 standards: 400GbE / 200GbE / 100GbE / 50GbE / 40GbE / 25GbE / 10GbE.
On-board Memory
BlueField-3 IC: The NVIDIA BlueField-3 DPU integrates 8 or 16 Armv8.2+ A78 Hercules (64-bit) cores interconnected by a coherent mesh network, a DRAM controller, an RDMA intelligent network adapter supporting up to 400Gb/s, an embedded PCIe switch with endpoint and root complex functionality, and up to 32 lanes of PCIe Gen 5.0.
Overlay Networks: In order to better scale their networks, data center operators often create overlay networks that carry traffic from individual virtual machines over logical tunnels in encapsulated formats such as NVGRE and VXLAN. While this solves network scalability issues, it hides the TCP packet from the hardware offloading engines, placing higher loads on the host CPU. The NVIDIA DPU effectively addresses this by providing advanced NVGRE and VXLAN hardware offloading engines that encapsulate and de-capsulate the overlay protocol.
RDMA and RDMA over Converged Ethernet (RoCE): The NVIDIA DPU, using IBTA RDMA (Remote Direct Memory Access) and RoCE technology, delivers low-latency, high-performance operation over InfiniBand and Ethernet networks. Leveraging data center bridging (DCB) capabilities as well as advanced congestion control hardware mechanisms, RoCE provides efficient low-latency RDMA services over Layer 2 and Layer 3 networks.
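As a rough illustration of how software consumes these RDMA services, the following minimal sketch uses the open-source libibverbs API (rdma-core) to enumerate RDMA devices and report whether each port runs over InfiniBand or Ethernet (RoCE). The build command and the use of port number 1 are assumptions for illustration, not part of this product documentation.

```c
/* Minimal libibverbs sketch: list RDMA devices and report port 1 state and
 * link layer (InfiniBand or Ethernet/RoCE).
 * Build (assumption): gcc rdma_query.c -o rdma_query -libverbs
 */
#include <stdio.h>
#include <infiniband/verbs.h>

int main(void)
{
    int num;
    struct ibv_device **list = ibv_get_device_list(&num);
    if (!list || num == 0) {
        fprintf(stderr, "no RDMA devices found\n");
        return 1;
    }

    for (int i = 0; i < num; i++) {
        struct ibv_context *ctx = ibv_open_device(list[i]);
        if (!ctx)
            continue;

        struct ibv_port_attr port;
        if (ibv_query_port(ctx, 1, &port) == 0) {
            printf("%s: state=%s, link_layer=%s\n",
                   ibv_get_device_name(list[i]),
                   ibv_port_state_str(port.state),
                   port.link_layer == IBV_LINK_LAYER_ETHERNET ?
                       "Ethernet (RoCE)" : "InfiniBand");
        }
        ibv_close_device(ctx);
    }

    ibv_free_device_list(list);
    return 0;
}
```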
Quality of Service (QoS): Support for port-based Quality of Service, enabling various application requirements for latency and SLA.
Storage Acceleration
NVMe-oF: Non-volatile Memory Express (NVMe) over Fabrics is a protocol for communicating block storage I/O requests over RDMA to transfer data between a host computer and a target solid-state storage device or system over a network. The NVIDIA BlueField-3 DPU may operate as a co-processor, offloading specific storage tasks from the host using its powerful NVMe over Fabrics offload accelerator.
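To make the initiator/target roles concrete, here is a heavily simplified sketch of a Linux NVMe-oF initiator connecting to an RDMA target through the standard /dev/nvme-fabrics interface (the same mechanism nvme-cli uses). The target address, port, and subsystem NQN are placeholders, the nvme-rdma kernel module is assumed to be loaded, and this is generic Linux code rather than BlueField-specific code.

```c
/* Sketch: connect a Linux NVMe-oF initiator to an RDMA target by writing a
 * connect string to /dev/nvme-fabrics. traddr, trsvcid, and nqn below are
 * placeholders; requires the nvme-rdma module and root privileges.
 */
#include <stdio.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>

int main(void)
{
    const char *opts =
        "transport=rdma,traddr=192.168.1.10,trsvcid=4420,"
        "nqn=nqn.2023-01.io.example:subsystem1";   /* placeholder values */

    int fd = open("/dev/nvme-fabrics", O_RDWR);
    if (fd < 0) {
        perror("open /dev/nvme-fabrics");
        return 1;
    }
    if (write(fd, opts, strlen(opts)) < 0) {
        perror("nvme-of connect");
        close(fd);
        return 1;
    }
    /* On success the kernel creates a new /dev/nvmeX controller. */
    close(fd);
    return 0;
}
```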
SR-IOV: NVIDIA DPU SR-IOV technology provides dedicated adapter resources and guaranteed isolation and protection for virtual machines (VMs) within the server.
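As a concrete example of provisioning VM-facing resources, the sketch below enables four virtual functions through the standard Linux sysfs SR-IOV interface. The network interface name is a placeholder for whatever netdev the BlueField-3 port exposes on your system, and SR-IOV is assumed to be enabled in firmware/BIOS.

```c
/* Sketch: enable SR-IOV virtual functions via the standard Linux sysfs
 * interface. "ens1f0np0" is a placeholder interface name; substitute the
 * netdev for your BlueField-3 port. Run as root.
 */
#include <stdio.h>

int main(void)
{
    const char *path = "/sys/class/net/ens1f0np0/device/sriov_numvfs";
    FILE *f = fopen(path, "w");
    if (!f) {
        perror("open sriov_numvfs");
        return 1;
    }
    fprintf(f, "4\n");   /* request 4 virtual functions */
    fclose(f);
    return 0;
}
```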
High-Performance Accelerations
GPU Direct: GPUDirect RDMA is a technology that provides a direct peer-to-peer (P2P) data path between GPU memory and NVIDIA HCA devices. This significantly decreases GPU-GPU communication latency and completely offloads the CPU, removing it from all GPU-GPU communications across the network. The NVIDIA DPU uses high-speed DMA transfers to copy data between P2P devices, resulting in more efficient system applications.
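The following minimal sketch shows the canonical GPUDirect RDMA pattern: registering GPU memory directly with the RDMA adapter so the NIC can DMA to and from it. It assumes a CUDA-capable GPU, the rdma-core verbs library, and a loaded nvidia-peermem (or nv_peer_mem) kernel module; the build line is an assumption.

```c
/* Sketch: register GPU memory with the RDMA NIC (GPUDirect RDMA).
 * Assumes the nvidia-peermem module is loaded so ibv_reg_mr() can pin and
 * map device memory for DMA by the adapter.
 * Build (assumption): nvcc gdr_reg.c -o gdr_reg -libverbs -lcudart
 */
#include <stdio.h>
#include <cuda_runtime.h>
#include <infiniband/verbs.h>

int main(void)
{
    int num;
    struct ibv_device **list = ibv_get_device_list(&num);
    if (!list || num == 0) { fprintf(stderr, "no RDMA devices\n"); return 1; }

    struct ibv_context *ctx = ibv_open_device(list[0]);
    if (!ctx) return 1;
    struct ibv_pd *pd = ibv_alloc_pd(ctx);
    if (!pd) return 1;

    void *gpu_buf = NULL;
    size_t len = 1 << 20;                        /* 1 MiB on the GPU */
    if (cudaMalloc(&gpu_buf, len) != cudaSuccess) return 1;

    /* The same registration call used for host memory works on GPU memory
     * when GPUDirect RDMA support is present. */
    struct ibv_mr *mr = ibv_reg_mr(pd, gpu_buf, len,
                                   IBV_ACCESS_LOCAL_WRITE |
                                   IBV_ACCESS_REMOTE_READ |
                                   IBV_ACCESS_REMOTE_WRITE);
    if (!mr) { perror("ibv_reg_mr on GPU memory"); return 1; }

    printf("registered GPU buffer: lkey=0x%x rkey=0x%x\n", mr->lkey, mr->rkey);

    ibv_dereg_mr(mr);
    cudaFree(gpu_buf);
    ibv_dealloc_pd(pd);
    ibv_close_device(ctx);
    ibv_free_device_list(list);
    return 0;
}
```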
Isolation: BlueField-3 DPU functions as a “computer-in-front-of-a-computer,” unlocking unlimited opportunities for custom security applications on its Arm processors, fully isolated from the host’s CPU. In the event of a compromised host, BlueField-3 may detect and block malicious activities in real time and at wire speed to prevent the attack from spreading further.
Cryptography Accelerations: From IPsec and TLS data-in-motion inline encryption to AES-XTS block-level data-at-rest encryption and public key acceleration, BlueField-3 DPU hardware-based accelerations offload crypto operations and free up the CPU, reducing latency and enabling scalable crypto solutions. BlueField-3 also enables “host-unaware” solutions, in which the host may transmit and receive data while BlueField-3 acts as a bump-in-the-wire for crypto.
Securing Workloads: BlueField-3 DPU accelerates connection tracking with its ASAP2 technology to enable stateful filtering on a per-connection basis. Moreover, BlueField-3 includes a Titan IC regular expression (RXP) acceleration engine, used by IDS/IPS tools for host introspection and application recognition (AR) in real time.
Security Accelerators: A consolidated compute and network solution based on the DPU achieves significant advantages over a centralized security server solution. Standard encryption protocols and security applications can leverage NVIDIA BlueField-3 compute capabilities and network offloads for security application solutions such as a Layer 4 stateful firewall.
Virtualized Cloud: By leveraging BlueField-3 DPU virtualization offloads, data center administrators can benefit from better server utilization, allowing more virtual machines and more tenants on the same hardware while reducing TCO and power consumption.
Out-of-Band Management: The NVIDIA BlueField-3 DPU incorporates a 1GbE RJ45 out-of-band management port that allows the network operator to establish trust boundaries in accessing the management function and to apply it to network resources. It can also be used to ensure management connectivity (including the ability to determine the status of any network component) independently of the status of other in-band network components.
BMC: Some BlueField-3 DPUs incorporate a local NIC BMC (baseboard management controller) on the board. The BMC SoC (system on a chip) can use either a shared or a dedicated NIC for remote access. The BMC node enables remote power cycling, board environment monitoring, BlueField-3 chip temperature monitoring, board power consumption monitoring, and individual interface resets. The BMC also supports pushing a boot stream to BlueField-3. Having a trusted on-board BMC that is fully isolated from the host server ensures the highest security for the DPU boards.