Introduction

The NVIDIA® BlueField®-3 networking platform is designed to accelerate data center infrastructure workloads and usher in the era of accelerated computing and AI. Supporting both Ethernet and InfiniBand connectivity, BlueField-3 offers speeds up to 400 gigabits per second (Gb/s). It combines powerful computing with software-defined hardware accelerators for networking, storage, and cybersecurity—all fully programmable through the NVIDIA DOCA™ software framework. Drawing on the platform’s robust capabilities, BlueField data processing units (DPUs) and BlueField SuperNICs revolutionize traditional computing environments, transforming them into secure, high-performance, efficient, and sustainable data centers suitable for any workload at any scale.

The BlueField-3 DPU is a cloud infrastructure processor that empowers organizations to build software-defined, hardware-accelerated data centers from the cloud to the edge. BlueField-3 DPUs offload, accelerate, and isolate software-defined networking, storage, security, and management functions, significantly enhancing data center performance, efficiency, and security. By decoupling data center infrastructure from business applications, BlueField-3 creates a secure, zero-trust data center infrastructure, streamlines operations, and reduces the total cost of ownership.

The BlueField-3 SuperNIC is a novel class of network accelerator that’s purpose-built for supercharging hyperscale AI workloads. Designed for network-intensive, massively parallel computing, the BlueField-3 SuperNIC provides best-in-class remote direct-memory access over converged Ethernet (RoCE) network connectivity between GPU servers at up to 400Gb/s, optimizing peak AI workload efficiency. For modern AI clouds, the BlueField-3 SuperNIC enables secure multi-tenancy while ensuring deterministic performance and performance isolation between tenant jobs.

System Requirements

PCI Express Slot

  • In PCIe x16 configuration: PCIe Gen 5.0 (32GT/s) through the x16 edge connector

  • In PCIe x16 extension option - switch DSP (Data Stream Port):

    • PCIe Gen 5.0 SERDES @32GT/s through the edge connector

    • PCIe Gen 5.0 SERDES @32GT/s through the PCIe Auxiliary Connection Card

System Power Supply

A minimum 75W system power supply is required for all cards.

The B3240, B3220, B3210, and B3210E DPUs require a supplementary 8-pin ATX power connection through the external power supply connector.

Warning

The power supply harness is not included in the package.

Important

Refer to Hardware Installation and PCIe Bifurcation for important notes and warnings about powering the card up and down.

Operating System

BlueField-3 platforms are shipped with Ubuntu, a commercial Linux distribution, which includes the NVIDIA OFED stack (MLNX_OFED) and can run customer Linux applications seamlessly. For more information, refer to the DOCA SDK documentation or the NVIDIA BlueField DPU BSP.

Connectivity

  • Interoperable with 1/10/25/40/50/100/200/400 Gb/s Ethernet switches and SDR/FDR/EDR/HDR100/HDR/NDR200/NDR InfiniBand switches

  • Passive copper cable with ESD protection

  • Powered connectors for optical and active cable support

For detailed information, see Specifications.

Prior to unpacking your product, make sure your server meets all the system requirements listed above for a smooth installation. Inspect each piece of equipment shipped in the packing box; if anything is missing or damaged, contact your reseller.

Card Package

Important

For B3240, B3220 and B3210E DPUs, you need an 8-pin PCIe external power cable to activate the card. The cable is not included in the package. For further details, please refer to External PCIe Power Supply Connector.

Item           Description
Card           1x BlueField-3 platform
Accessories    1x tall bracket (shipped assembled on the card)


Accessories Kit

Warning

This is an optional accessories kit used for debugging purposes and can be ordered separately.

Kit OPN: MBF35-DKIT

Contents:

  • 4-pin USB to female USB Type-A cable

  • 20-pin shrouded connector to USB Type-A cable


Optional PCIe Auxiliary Card Package

Important

The Socket-Direct functionality is currently not supported by firmware. Please contact your sales representative.

Warning

This is an optional kit which applies to the following OPNs:

  • B3220: 900-9D3B6-00CV-AA0 and 900-9D3B6-00SV-AA0

  • B3240: 900-9D3B6-00CN-AB0 and 900-9D3B6-00SN-AB0

  • B3210: 900-9D3B6-00CC-AA0 and 900-9D3B6-00SC-AA0

  • B3210E: 900-9D3B6-00CC-EA0 and 900-9D3B6-00SC-EA0

The PCIe auxiliary kit is purchased separately to utilize the Socket-Direct functionality in dual-socket servers or for the downstream port extension option. For package contents and more information, refer to PCIe Auxiliary Card Kit.

Features and Benefits

Warning

This section describes hardware features and capabilities. Please refer to the relevant driver and/or firmware release notes for feature availability.

InfiniBand Architecture Specification v1.5 compliant

BlueField-3 platforms deliver low latency, high bandwidth, and computing efficiency for high-performance computing (HPC), artificial intelligence (AI), and hyperscale cloud data center applications.

BlueField-3 platforms are InfiniBand Architecture Specification v1.5 compliant.

InfiniBand network protocols and rates:

Protocol      Standard          Rate (Gb/s)                            Comments
                                4x Port            2x Ports
                                (4 Lanes)          (2 Lanes)
NDR/NDR200    IBTA Vol2 1.5     425                212.5               PAM4 256b/257b encoding and RS-FEC
HDR/HDR100    IBTA Vol2 1.4     212.5              106.25              PAM4 256b/257b encoding and RS-FEC
EDR           IBTA Vol2 1.3.1   103.125            51.5625             NRZ 64b/66b encoding
FDR           IBTA Vol2 1.2     56.25              N/A                 NRZ 64b/66b encoding

Up to 400 Gigabit Ethernet

BlueField-3 platforms comply with the following IEEE 802.3 standards:

400GbE / 200GbE / 100GbE / 50GbE / 40GbE / 25GbE / 10GbE

Protocol                                    MAC Rate
IEEE 802.3ck                                400/200/100 Gigabit Ethernet (including ETC enhancement)
IEEE 802.3cd, IEEE 802.3bs, IEEE 802.3cm,
IEEE 802.3cn, IEEE 802.3cu                  400/200/100 Gigabit Ethernet (including ETC enhancement)
IEEE 802.3bj, IEEE 802.3bm                  100 Gigabit Ethernet
IEEE 802.3by, Ethernet Consortium           50/25 Gigabit Ethernet
IEEE 802.3ba                                40 Gigabit Ethernet
IEEE 802.3ae                                10 Gigabit Ethernet
IEEE 802.3cb                                2.5/5 Gigabit Ethernet (for 2.5G: supports only 2.5 x 1000BASE-X)
IEEE 802.3ap                                Based on auto-negotiation and KR startup
IEEE 802.3ad, IEEE 802.1AX                  Link Aggregation
IEEE 802.1Q, IEEE 802.1P                    VLAN tags and priority
IEEE 802.1Qau (QCN)                         Congestion Notification
IEEE 802.1Qaz (ETS)
IEEE 802.1Qbb (PFC)
IEEE 802.1Qbg
IEEE 1588v2
IEEE 802.1AE (MACsec)
Jumbo frame support (9.6KB)

On-board Memory

  • Quad SPI NOR flash - 256Mbit for the firmware image

  • UVPS EEPROM - 2Mbit

  • FRU EEPROM - stores the parameters and personality of the card. The EEPROM capacity is 128Kbit. The FRU I2C address is 0x50, and it is accessible through the PCIe SMBus (see the sketch after this list).

  • DPU BMC flashes:

    • 2x 64MByte for the BMC image

    • 512MByte for configuration data

  • eMMC - 40GB pSLC eMMC with 30K write cycles, for the SoC BIOS

  • SSD (onboard BGA) - 128GByte for the user SoC OS, logs, and application software

  • DDR5 SDRAM - 16GB/32GB single/dual-channel DDR5 SDRAM soldered on-board, running at 5200MT/s or 5600MT/s, 128-bit + 16-bit ECC
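The FRU EEPROM noted above can be read from the host with standard Linux I2C tooling. Below is a minimal sketch, assuming the PCIe SMBus is exposed as a Linux i2c-dev bus whose number has already been identified (e.g. with i2cdetect -l); the bus number used here is a placeholder.

    # Minimal sketch: dump the first bytes of the FRU EEPROM at I2C address 0x50.
    # Assumptions (not from this guide): the PCIe SMBus is visible as a Linux
    # i2c-dev bus, BUS_NUM below is a placeholder, and the smbus2 package is
    # installed. Requires root privileges.
    from smbus2 import SMBus

    BUS_NUM = 1        # hypothetical bus number; locate yours with i2cdetect -l
    FRU_ADDR = 0x50    # FRU EEPROM address from the list above

    with SMBus(BUS_NUM) as bus:
        # Read the first 16 bytes; a full FRU parser would decode the IPMI FRU
        # common header that conventionally starts at offset 0.
        header = bus.read_i2c_block_data(FRU_ADDR, 0x00, 16)
        print(" ".join(f"{b:02x}" for b in header))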

BlueField-3 IC

The BlueField-3 platforms integrate 8 or 16 Armv8.2+ A78 Hercules (64-bit) cores interconnected by a coherent mesh network, one DRAM controller, an RDMA intelligent network adapter supporting up to 400Gb/s, an embedded PCIe switch with endpoint and root-complex functionality, and up to 32 lanes of PCIe Gen 5.0.

Overlay Networks

In order to better scale their networks, data center operators often create overlay networks that carry traffic from individual virtual machines over logical tunnels in encapsulated formats such as NVGRE and VXLAN. While this solves network scalability issues, it hides the TCP packet from the hardware offloading engines, placing higher loads on the host CPU. BlueField-3 platforms effectively address this by providing advanced NVGRE and VXLAN hardware offloading engines that encapsulate and decapsulate the overlay protocol.
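To make the encapsulation concrete, the minimal Python sketch below builds the 8-byte VXLAN header that wraps an inner Ethernet frame (the result then travels in a UDP datagram to port 4789). It is a software illustration of the operation the BlueField-3 engines perform in hardware; the inner frame and VNI are placeholders.

    import struct

    VXLAN_FLAG_VNI_VALID = 0x08   # "I" flag: the VNI field is valid
    VXLAN_UDP_PORT = 4789         # IANA-assigned VXLAN destination port

    def vxlan_encapsulate(inner_frame: bytes, vni: int) -> bytes:
        """Prepend the 8-byte VXLAN header to an inner Ethernet frame."""
        assert 0 <= vni < 2 ** 24, "the VNI is a 24-bit field"
        # Flags (8 bits) | reserved (24 bits) | VNI (24 bits) | reserved (8 bits)
        header = struct.pack("!BBBB", VXLAN_FLAG_VNI_VALID, 0, 0, 0)
        header += struct.pack("!I", vni << 8)  # VNI in the top 24 bits
        return header + inner_frame

    # Example: wrap a dummy 14-byte inner frame in VNI 42
    print(vxlan_encapsulate(b"\x00" * 14, vni=42).hex())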

RDMA and RDMA over Converged InfiniBand/Ethernet (RoCE)

Utilizing IBTA RDMA (Remote Direct Memory Access) and RoCE (RDMA over Converged InfiniBand/Ethernet) technology, the BlueField-3 platforms deliver low latency and high performance over InfiniBand/Ethernet networks. Leveraging data center bridging (DCB) capabilities as well as advanced congestion control hardware mechanisms, RoCE provides efficient low-latency RDMA services over Layer 2 and Layer 3 networks.
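As a quick orientation to the software side, the sketch below enumerates the RDMA devices a host sees and opens the first building blocks of a RoCE application, using the pyverbs bindings shipped with rdma-core. This assumes pyverbs is installed; the device name "mlx5_0" is a typical but hypothetical example.

    import pyverbs.device as d
    from pyverbs.device import Context
    from pyverbs.pd import PD

    # List the RDMA-capable devices visible to the host; BlueField-3 network
    # ports appear here once the driver stack is loaded.
    for dev in d.get_device_list():
        print(dev.name.decode())

    # Open a device context and allocate a protection domain, the first steps
    # of any verbs-based RDMA/RoCE application ("mlx5_0" is a placeholder).
    with Context(name="mlx5_0") as ctx:
        pd = PD(ctx)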

Quality of Service (QoS)

Support for port-based Quality of Service, enabling applications with diverse latency and SLA requirements.

Storage Acceleration

  • A consolidated compute and storage network achieves significant cost-performance advantages over multi-fabric networks. Standard block and file access protocols can leverage RDMA for high-performance storage access, including NVMe over Fabrics offloads for the target machine.

  • BlueField-3 cards may operate as a co-processor offloading specific storage tasks from the host, isolating part of the storage media from the host, or enabling abstraction of software-defined storage logic using the NVIDIA BlueField-3 Arm cores. On the storage initiator side, BlueField-3 networking platforms provide an efficient solution for hyper-converged systems, enabling the host CPU to focus on compute while the entire storage interface is handled through the Arm cores.

NVMe-oF

Non-Volatile Memory Express (NVMe) over Fabrics is a protocol for communicating block storage I/O requests over RDMA, transferring data between a host computer and a target solid-state storage device or system over a network. BlueField-3 platforms may operate as a co-processor offloading specific storage tasks from the host using their powerful NVMe over Fabrics offload accelerator.
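On the initiator side, the Linux kernel exposes NVMe-oF connections through the /dev/nvme-fabrics interface (the same mechanism nvme-cli drives). The sketch below opens an RDMA-transport connection; the target address, port, and NQN are hypothetical placeholders, and the nvme-rdma kernel module must be loaded.

    # Minimal sketch of an NVMe-oF initiator connect over RDMA via the Linux
    # fabrics interface. Requires root and the nvme-rdma module; all values
    # below are placeholders for a real target.
    params = ",".join([
        "transport=rdma",                        # RoCE/InfiniBand RDMA transport
        "traddr=192.0.2.10",                     # target address (example IP)
        "trsvcid=4420",                          # conventional NVMe-oF port
        "nqn=nqn.2014-08.org.example:subsys1",   # target subsystem NQN
    ])

    with open("/dev/nvme-fabrics", "w") as f:
        f.write(params)  # on success the kernel creates a new /dev/nvmeX device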

SR-IOV

The SR-IOV technology provides dedicated adapter resources and guaranteed isolation and protection for virtual machines (VMs) within the server.
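For illustration, virtual functions are typically enabled through the standard Linux sysfs interface, as in the minimal sketch below (the interface name is a placeholder and root privileges are required):

    IFACE = "ens1f0"   # hypothetical netdev name of a BlueField-3 port
    NUM_VFS = 4        # number of virtual functions to create

    sysfs = f"/sys/class/net/{IFACE}/device/sriov_numvfs"

    # The kernel requires resetting the VF count to 0 before setting a new
    # nonzero value.
    with open(sysfs, "w") as f:
        f.write("0")
    with open(sysfs, "w") as f:
        f.write(str(NUM_VFS))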

High-Performance Accelerations

  • Tag Matching and Rendezvous Offloads

  • Adaptive Routing on Reliable Transport

  • Burst Buffer Offloads for Background Checkpointing

GPUDirect

GPUDirect RDMA is a technology that provides a direct peer-to-peer (P2P) data path between GPU memory and NVIDIA HCA devices. This significantly decreases GPU-GPU communication latency and completely offloads the CPU, removing it from all GPU-GPU communications across the network. BlueField-3 platforms use high-speed DMA transfers to copy data between P2P devices, resulting in more efficient system applications.

Isolation

BlueField-3 platforms function as a “computer-in-front-of-a-computer,” unlocking unlimited opportunities for custom security applications on its Arm processors, fully isolated from the host’s CPU. In the event of a compromised host, BlueField-3 may detect/block malicious activities in real-time and at wire speed to prevent the attack from spreading further.

Cryptography Accelerations

From IPsec and TLS data-in-motion inline encryption to AES-XTS block-level data-at-rest encryption and public key acceleration, BlueField-3 hardware-based accelerations offload crypto operations and free up the CPU, reducing latency and enabling scalable crypto solutions. In BlueField-3 “host-unaware” solutions, the host may transmit and receive data as usual while BlueField-3 acts as a bump-in-the-wire for crypto.
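For reference, the snippet below performs the same AES-XTS block-level operation in host software using the Python cryptography package, illustrating the operation BlueField-3 offloads to hardware; the key, tweak, and sector contents are random placeholders.

    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    key = os.urandom(64)     # AES-256-XTS uses a 512-bit (two-key) key
    tweak = os.urandom(16)   # per-sector tweak, e.g. derived from the LBA

    sector = os.urandom(512)                  # one sector of plaintext
    enc = Cipher(algorithms.AES(key), modes.XTS(tweak)).encryptor()
    ciphertext = enc.update(sector) + enc.finalize()

    dec = Cipher(algorithms.AES(key), modes.XTS(tweak)).decryptor()
    assert dec.update(ciphertext) + dec.finalize() == sector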

Security Accelerators

A consolidated compute and network solution based on BlueField-3 achieves significant advantages over a centralized security server solution. Standard encryption protocols and security applications can leverage BlueField-3 compute capabilities and network offloads for security application solutions such as a Layer-4 stateful firewall.

Virtualized Cloud

By leveraging BlueField-3 virtualization offloads, data center administrators can benefit from better server utilization, allowing more virtual machines and more tenants on the same hardware while reducing the TCO and power consumption.

Out-of-Band Management

The BlueField-3 platforms incorporate a 1GbE RJ45 out-of-band port that allows the network operator to establish trust boundaries in accessing the management function and to apply it to network resources. It can also be used to ensure management connectivity (including the ability to determine the status of any network component) independently of the status of other in-band network components.

BMC

Some BlueField-3 platforms incorporate local NIC BMC (Baseboard Management Controller) hardware on the board. The BMC SoC (system on a chip) can utilize either shared or dedicated NICs for remote access. The BMC node enables remote power cycling, board environment monitoring, BlueField-3 chip temperature monitoring, board power and consumption monitoring, and individual interface resets. The BMC also supports the ability to push a bootstream to BlueField-3.

Having a trusted on-board BMC that is fully isolated from the host server ensures the highest security for the BlueField-3 platforms.
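As an illustration of the remote power cycling mentioned above, the hypothetical sketch below assumes the BMC exposes a standard Redfish endpoint (this guide does not specify the BMC's management protocol); the address, credentials, and system ID are placeholders.

    import requests

    BMC = "https://192.0.2.20"      # BMC out-of-band address (placeholder)
    AUTH = ("admin", "password")    # placeholder credentials

    # Standard Redfish ComputerSystem.Reset action; "system" is a placeholder
    # resource ID.
    resp = requests.post(
        f"{BMC}/redfish/v1/Systems/system/Actions/ComputerSystem.Reset",
        json={"ResetType": "ForceRestart"},
        auth=AUTH,
        verify=False,  # lab-only: skips TLS certificate verification
    )
    resp.raise_for_status()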

© Copyright 2024, NVIDIA. Last updated on Mar 27, 2024.