NVIDIA BlueField-2 InfiniBand/Ethernet DPU User Guide

Introduction

The NVIDIA® BlueField®-2 data processing unit (DPU) is a data center infrastructure on a chip optimized for traditional enterprise, high-performance computing (HPC), and modern cloud workloads, delivering a broad set of accelerated software-defined
networking, storage, security, and management services. BlueField-2 DPU enables organizations to transform their IT infrastructures into state-of-the-art data centers that are accelerated, fully programmable and armed with “zero trust” security to prevent data breaches and cyber-attacks.

By combining the industry-leading NVIDIA ConnectX®-6 Dx network adapter with an array of Arm® cores, BlueField-2 offers purpose-built, hardware-acceleration engines with full software programmability. Sitting at the edge of every server, BlueField-2 is optimized to handle critical infrastructure tasks quickly, increasing data center efficiency. BlueField-2 empowers agile and high-performance solutions for cloud networking, storage, cybersecurity, data analytics, HPC, and artificial intelligence (AI), from edge to core data centers and clouds, all while reducing the total cost of ownership. The NVIDIA DOCA software development kit (SDK) enables developers to create applications and services for the BlueField-2 DPU rapidly. The DOCA SDK makes it easy and straightforward to leverage DPU hardware accelerators and CPU programmability for better application performance and security.

Important

The BlueField-2 DPU is designed and validated for operation in data-center servers and other large environments that guarantee proper power supply and airflow conditions.

The DPU is not intended for installation on a desktop or a workstation. Moreover, installing the DPU in any system without proper power and airflow levels can impact the DPU's functionality and potentially damage it. Failure to meet the environmental requirements listed in this user manual may void the warranty.

Main-board PCI Express slot: x16 PCIe Gen 4.0 slot.

System Power Supply: A system power supply of 75W or greater for all cards. P-Series DPU controllers with PCIe Gen 4.0 x16 require an additional 75W through a supplementary 6-pin ATX power supply connector.
NOTE: The connector is not included in the package. It should be part of the system wiring, or it can be ordered separately as a system accessory.

Operating System: The BlueField-2 DPU is shipped with Ubuntu, a commercial Linux operating system, which includes the NVIDIA OFED stack (MLNX_OFED) and can run customer Linux applications seamlessly. The BlueField-2 DPU also supports CentOS and provides an out-of-band 1GbE management interface. For more information, please refer to the DOCA SDK documentation or the NVIDIA BlueField-2 Software User Manual.

Connectivity:

  • Interoperable with 1/10/25/40/50/100/200 Gb/s Ethernet switches and SDR/DDR/QDR/FDR/EDR/HDR100/HDR InfiniBand switches

  • Interoperable with copper/optic cables and SR4 modules only

  • Powered connectors for optical and active cable support

Before unpacking your DPU, it is important to ensure you meet all the system requirements listed above for a smooth installation. Be sure to inspect each piece of equipment shipped in the packing box. If anything is missing or damaged, contact your reseller.

Card Package

Card: 1x BlueField-2 DPU

Accessories: 1x tall bracket (shipped assembled on the card)

For MBF2M345A-HECOT/MBF2M345A-HESOT only: 1x short bracket is included in the box (not assembled).

For other cards: short brackets should be ordered separately.

Accessories Kit

Warning

The accessories kit should be ordered separately. Please refer to the table below and order the kit that corresponds to the desired DPU.

DPU OPNs: MBF2H516A-EEEOT, MBF2H516A-EENOT, MBF2M516A-EECOT, MBF2M516A-EEEOT, MBF2M516A-EENOT
Accessories Kit OPN: MBF20-DKIT
Contents: 1x USB 2.0 Type-A to Mini USB Type-B cable
          1x USB 2.0 Type-A to 30-pin flat socket

DPU OPNs: All other DPUs
Accessories Kit OPN: MBF25-DKIT
Contents: 4-pin USB to female USB Type-A cable
          30-pin shrouded connector cable

Important

For FHHL 100Gb/s P-Series DPUs, you need a 6-pin PCIe external power cable to activate the card. The cable is not included in the package. For further details, please refer to External PCIe Power Supply Connector.

For detailed information, see Specifications.

Warning

This section describes hardware features and capabilities. Please refer to the relevant driver and/or firmware release notes for feature availability.

PCI Express (PCIe)

Uses PCIe Gen 4.0 (16GT/s) through an x16 edge connector. Gen 1.1, 2.0, and 3.0 compatible.
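As a quick sanity check after installation, the negotiated PCIe link speed and width can be read from sysfs on a Linux host. The following is a minimal sketch, assuming a hypothetical PCI address for the DPU (substitute the address reported by lspci on your system):

    from pathlib import Path

    # Hypothetical PCI address of the BlueField-2 PCIe function;
    # substitute the address reported by lspci on your system.
    dev = Path("/sys/bus/pci/devices/0000:03:00.0")

    # A Gen 4.0 x16 link should report a 16.0 GT/s speed and a width of 16.
    print("link speed:", (dev / "current_link_speed").read_text().strip())
    print("link width: x" + (dev / "current_link_width").read_text().strip())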

InfiniBand Architecture Specification v1.3 compliant

BlueField-2 DPU delivers low latency, high bandwidth, and computing efficiency for high-performance computing (HPC), artificial intelligence (AI), and hyperscale cloud data center applications.
BlueField-2 DPU is InfiniBand Architecture Specification v1.3 compliant.

InfiniBand Network Protocols and Rates:

Protocol     Standard          4x Port (4 Lanes)   2x Ports (2 Lanes)   Comments
                               Rate (Gb/s)         Rate (Gb/s)
HDR/HDR100   IBTA Vol2 1.4     212.5               106.25               PAM4 256b/257b encoding and RS-FEC
EDR          IBTA Vol2 1.3.1   103.125             51.5625              NRZ 64b/66b encoding
FDR          IBTA Vol2 1.2     56.25               N/A                  NRZ 64b/66b encoding
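The aggregate rates in the table follow directly from the per-lane signaling rate of each InfiniBand generation multiplied by the lane count. A short worked example in Python:

    # Per-lane signaling rates in Gb/s and the lane counts listed in the table above.
    configs = {
        "HDR/HDR100": (53.125, (4, 2)),    # 4 lanes -> 212.5, 2 lanes -> 106.25
        "EDR":        (25.78125, (4, 2)),  # 4 lanes -> 103.125, 2 lanes -> 51.5625
        "FDR":        (14.0625, (4,)),     # listed only as a 4-lane port -> 56.25
    }

    for protocol, (per_lane, lane_counts) in configs.items():
        rates = ", ".join(f"{lanes} lanes = {lanes * per_lane} Gb/s" for lanes in lane_counts)
        print(f"{protocol}: {rates}")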

Up to 200 Gigabit Ethernet

BlueField-2 DPU complies with the following IEEE 802.3 standards:
200GbE / 100GbE / 50GbE / 40GbE / 25GbE / 10GbE

Protocol                                      MAC Rate
IEEE 802.3ck                                  200/100 Gigabit Ethernet (including ETC enhancement)
IEEE 802.3cd, IEEE 802.3bs, IEEE 802.3cm,     200/100 Gigabit Ethernet (including ETC enhancement)
IEEE 802.3cn, IEEE 802.3cu
IEEE 802.3bj, IEEE 802.3bm                    100 Gigabit Ethernet
IEEE 802.3by, 25G Ethernet Consortium         50/25 Gigabit Ethernet
IEEE 802.3ba                                  40 Gigabit Ethernet
IEEE 802.3ae                                  10 Gigabit Ethernet
IEEE 802.3cb                                  2.5/5 Gigabit Ethernet (for 2.5 GbE, only 2.5x 1000BASE-X is supported)
IEEE 802.3ap                                  Based on auto-negotiation and KR startup
IEEE 802.3ad, IEEE 802.1AX                    Link Aggregation
IEEE 802.1Q, IEEE 802.1P                      VLAN tags and priority
IEEE 802.1Qau (QCN)                           Congestion Notification
IEEE 802.1Qaz (ETS)
IEEE 802.1Qbb (PFC)
IEEE 802.1Qbg
IEEE 1588v2
IEEE 802.1AE
Jumbo frame support (9.6KB)

On-board Memory

  • Quad SPI NOR FLASH - includes 256Mbit for Firmware image.

  • UVPS EEPROM - includes 1Mbit.

  • FRU EEPROM - Stores the parameters and personality of the card. The EEPROM capacity is 128Kbit. The FRU I2C address is 0x50, and it is accessible through the PCIe SMBus (see the example sketch after this list).

  • eMMC - x8 NAND flash (memory size might vary on different DPUs) for Arm boot, OS, and disk space.

  • DDR4 SDRAM - 16GB/32GB @ 3200MT/s single-channel DDR4 SDRAM memory, soldered down on the board. 64bit + 8bit ECC.
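Since the FRU EEPROM sits at I2C address 0x50 behind the PCIe SMBus, its contents can be read from the host with any SMBus-capable tool. The following is a minimal sketch using the third-party smbus2 Python package, assuming the EEPROM is visible on host SMBus 1 (verify the bus number with i2cdetect -l on your platform):

    from smbus2 import SMBus  # third-party package: pip install smbus2

    FRU_ADDR = 0x50   # FRU EEPROM I2C address listed above
    BUS = 1           # hypothetical host SMBus number; verify with i2cdetect -l

    with SMBus(BUS) as bus:
        # Read the first 16 bytes of the FRU EEPROM (the IPMI FRU common header).
        header = [bus.read_byte_data(FRU_ADDR, offset) for offset in range(16)]
        print(" ".join(f"{b:02x}" for b in header))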

BlueField-2 DPU

The NVIDIA BlueField-2 DPU integrates eight 64-bit Armv8 A72 cores interconnected by a coherent mesh network, one DRAM controller, an RDMA intelligent network adapter supporting up to 200Gb/s, an embedded PCIe switch with endpoint and root complex functionality, and up to 16 lanes of PCIe Gen 3.0/4.0.

Overlay Networks

To better scale their networks, data center operators often create overlay networks that carry traffic from individual virtual machines over logical tunnels in encapsulated formats such as NVGRE and VXLAN. While this solves network scalability issues, it hides the TCP packet from the hardware offloading engines, placing higher loads on the host CPU. NVIDIA DPU effectively addresses this by providing advanced NVGRE and VXLAN hardware offloading engines that encapsulate and de-capsulate the overlay protocol.
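Whether the UDP-tunnel (VXLAN/GENEVE) segmentation offloads are active on a given uplink can be checked from the host with ethtool. The following is a minimal sketch, assuming a hypothetical uplink netdev named p0:

    import subprocess

    IFACE = "p0"  # hypothetical uplink netdev name; use the actual interface on your host

    # List the UDP-tunnel segmentation offload features reported by ethtool.
    out = subprocess.run(["ethtool", "-k", IFACE],
                         capture_output=True, text=True, check=True).stdout
    for line in out.splitlines():
        if "udp_tnl" in line:
            print(line.strip())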

RDMA and RDMA over Converged Ethernet (RoCE)

NVIDIA DPU, utilizing IBTA RDMA (Remote Direct Memory Access) and RoCE (RDMA over Converged Ethernet) technology, delivers low-latency and high-performance over InfiniBand and Ethernet networks. Leveraging data center bridging (DCB) capabilities as well as advanced congestion control hardware mechanisms, RoCE provides efficient low-latency RDMA services over Layer 2 and Layer 3 networks.

NVIDIA PeerDirect™

PeerDirect communication provides high-efficiency RDMA access by eliminating unnecessary internal data copies between components on the PCIe bus (for example, from GPU to CPU), significantly reducing application run time. NVIDIA DPU's advanced acceleration technology enables higher cluster efficiency and scalability to tens of thousands of nodes.

Quality of Service (QoS)

Support for port-based Quality of Service enabling various application requirements for latency and SLA.

Storage Acceleration

  • A consolidated compute and storage network achieves significant cost-performance advantages over multi-fabric networks. Standard block and file access protocols can leverage RDMA for high-performance storage access: NVMe over Fabric offloads for the target machine

  • BlueField-2 DPU may operate as a co-processor offloading specific storage tasks from the host, isolating part of the storage media from the host, or enabling abstraction of software-defined storage logic using the NVIDIA BlueField-2 Arm cores. On the storage initiator side, NVIDIA BlueField-2 DPU can provide an efficient solution for hyper-converged systems, enabling the host CPU to focus on computing while the storage interface is handled entirely through the Arm cores.

NVMe-oF

Non-volatile Memory Express (NVMe) over Fabrics is a protocol for communicating block storage IO requests over RDMA to transfer data between a host computer and a target solid-state storage device or system over a network. NVIDIA BlueField-2 DPU may operate as a co-processor offloading specific storage tasks from the host using its powerful NVMe over Fabrics Offload accelerator.
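On the initiator side, connecting to an NVMe-oF subsystem over RDMA is typically done with nvme-cli. The following is a minimal sketch wrapping that call from Python; the address, port, and NQN below are placeholders:

    import subprocess

    # Placeholder target parameters; replace with your subsystem's values.
    cmd = [
        "nvme", "connect",
        "-t", "rdma",                            # NVMe-oF transport over RDMA (RoCE/InfiniBand)
        "-a", "192.168.100.10",                  # target IP address (example)
        "-s", "4420",                            # default NVMe-oF service port
        "-n", "nqn.2023-01.io.example:subsys1",  # target subsystem NQN (example)
    ]
    subprocess.run(cmd, check=True)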

SR-IOV

NVIDIA DPU SR-IOV technology provides dedicated adapter resources and guaranteed isolation and protection for virtual machines (VM) within the server.
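On a Linux host, virtual functions are typically instantiated through the standard sysfs interface once SR-IOV has been enabled in the DPU firmware and system BIOS. The following is a minimal sketch, assuming a hypothetical physical-function netdev named p0 and root privileges:

    from pathlib import Path

    IFACE = "p0"   # hypothetical physical function netdev name
    NUM_VFS = 4    # number of virtual functions to create

    vf_file = Path(f"/sys/class/net/{IFACE}/device/sriov_numvfs")
    vf_file.write_text("0")           # reset any existing VFs first
    vf_file.write_text(str(NUM_VFS))  # then request the desired number of VFs
    print("sriov_numvfs =", vf_file.read_text().strip())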

High-Performance Accelerations

  • Tag Matching and Rendezvous Offloads

  • Adaptive Routing on Reliable Transport

  • Burst Buffer Offloads for Background Checkpointing

GPU Direct

GPUDirect RDMA is a technology that provides a direct P2P (peer-to-peer) data path between GPU memory and NVIDIA HCA devices. This significantly decreases GPU-GPU communication latency and completely offloads the CPU, removing it from all GPU-GPU communications across the network. NVIDIA DPU uses high-speed DMA transfers to copy data between P2P devices, resulting in more efficient system applications.

Isolation

BlueField-2 DPU functions as a “computer-in-front-of-a-computer,” unlocking unlimited opportunities for custom security applications on its Arm processors, fully isolated from the host’s CPU. In the event of a compromised host, BlueField-2 may detect/block malicious activities in real-time and at wire speed to prevent the attack from spreading further.

Cryptography Accelerations

From IPsec and TLS data-in-motion inline encryption to AES-XTS block-level data-at-rest encryption and public key acceleration, BlueField-2 DPU hardware-based accelerations offload the crypto operations and free up the CPU, reducing latency and enabling scalable crypto solutions. BlueField-2 “host-unaware” solutions may transmit and receive data, while BlueField-2 acts as a bump-in-the-wire for crypto.

Securing Workloads

BlueField-2 DPU accelerates connection tracking with its ASAP2 technology to enable stateful filtering on a per-connection basis. Moreover, BlueField-2 includes a Titan IC regular expression (RXP) acceleration engine that IDS/IPS tools can leverage for real-time host introspection and Application Recognition (AR).

Security Accelerators

A consolidated compute and network solution based on the DPU achieves significant advantages over a centralized security server solution. Standard encryption protocols and security applications can leverage NVIDIA BlueField-2 compute capabilities and network offloads for security application solutions such as a Layer-4 stateful firewall.

Virtualized Cloud

By leveraging BlueField-2 DPU virtualization offloads, data center administrators can benefit from better server utilization, allowing more virtual machines and more tenants on the same hardware while reducing the TCO and power consumption.

Out-of-Band Management

The NVIDIA BlueField-2 DPU incorporates a 1GbE RJ45 out-of-band port that allows the network operator to establish trust boundaries for accessing the management function and to apply it to network resources. It can also be used to ensure management connectivity (including the ability to determine the status of any network component) independent of the status of other in-band network components.

BMC

Some DPUs incorporate a local NIC BMC (baseboard management controller) on the board. The BMC SoC (system on a chip) can utilize either shared or dedicated NICs for remote access. The BMC node enables remote power cycling, board environment monitoring, BlueField-2 chip temperature monitoring, board power consumption monitoring, and individual interface resets. The BMC also supports the ability to push a boot stream to BlueField-2.
Having a trusted on-board BMC that is fully isolated from the host server ensures the highest security for the DPU boards.
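BMCs of this class are commonly managed out of band through standard interfaces such as Redfish or IPMI. The following is a minimal Redfish sketch, assuming the BMC is reachable at a placeholder out-of-band address with placeholder credentials and that Redfish is enabled:

    import requests

    BMC_IP = "192.168.0.2"        # placeholder out-of-band address of the DPU BMC
    AUTH = ("root", "changeme")   # placeholder credentials; use your BMC account

    # Query the standard Redfish service root exposed by the BMC.
    resp = requests.get(f"https://{BMC_IP}/redfish/v1/", auth=AUTH, verify=False)
    resp.raise_for_status()
    print("Redfish version:", resp.json().get("RedfishVersion"))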

PPS IN/OUT

NVIDIA offers a full IEEE 1588v2 PTP software solution, as well as time-sensitive related features called “5T”. The NVIDIA PTP and 5T software solutions are designed to meet the most demanding PTP profiles. BlueField-2 incorporates an integrated PTP hardware clock (PHC) that allows BlueField-2 to achieve sub-20 µsec accuracy and offers many timing-related functions such as time-triggered scheduling or time-based SND accelerations. The PTP part supports the subordinate clock, master clock, and boundary clock.

The BlueField-2 PTP solution allows you to run any PTP stack on your host and on the DPU Arm cores.
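For example, the widely used linuxptp stack can be started against a DPU interface either on the host or on the Arm cores. The following is a minimal sketch, assuming a hypothetical PTP-capable interface named p0:

    import subprocess

    IFACE = "p0"  # hypothetical PTP-capable interface (host side or DPU Arm side)

    # Run linuxptp's ptp4l on the interface, printing messages to stdout (-m)
    # and acting as a time receiver only (-s).
    subprocess.run(["ptp4l", "-i", IFACE, "-m", "-s"], check=True)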

The MMCX connector on the board allows customers to connect an RF cable without occupying space on the external bracket.

This feature is enabled in MBF2M516C-EESOT and MBF2M516C-EECOT.

© Copyright 2023, NVIDIA. Last updated on Sep 8, 2023.