NVIDIA BlueField-2 BF2500 Ethernet DPU Controller User Manual

Introduction

This is the User Manual for the NVIDIA® BlueField®-2 BF2500 DPU Controller. This document describes the product interfaces and specifications, the software and firmware required to operate the board, and the step-by-step procedure for bringing up the BlueField-2 BF2500 DPU Controller.

Main-board PCI Express slot: x16 PCIe Gen 4.0 slot.

System Power Supply: A 75W or greater system power supply for all cards. These PCIe Gen 4.0 x16 DPU controllers require an additional 75W through a supplementary 6-pin ATX power supply connector.

NOTE: The connector is not included in the package. It should be part of the system wiring, or it can be ordered separately as a system accessory.

Operating System: The BlueField-2 DPU is shipped with Ubuntu – a commercial Linux operating system – which includes the NVIDIA OFED stack (MLNX_OFED) and is capable of running all customer-based Linux applications seamlessly. The BlueField-2 DPU also supports CentOS and has an out-of-band 1GbE management interface. For more information, please refer to the DOCA SDK documentation or the NVIDIA BlueField-2 Software User Manual.

Connectivity:

  • Interoperable with 1/10/25/40/50/100/200 Gb/s Ethernet switches

  • Passive copper cable with ESD protection

  • Powered connectors for optical and active cable support

For detailed information, see Specifications.

Before installing your new system, unpack it and check against the tables below that all the parts have been received. Check the parts for visible damage that may have occurred during shipping.

Important

If anything is damaged or missing, contact your reseller.

Card Package

Cards: 1x BlueField-2 DPU Controller card with an assembled tall bracket

Accessories Kit

The accessories kit should be ordered separately. Earlier controller versions require the kit OPN MBF20-DKIT, while newer versions require kit OPN MBF25-DKIT.

Kit OPN MBF20-DKIT contents:

  • 1x USB 2.0 Type A to mini-USB Type B cable

  • 1x USB 2.0 Type A to 30-pin flat socket

Important

For these DPU controllers, a 6-pin ATX power supply connector cable is required to activate the card. The cable is not included in the package. For further details, please refer to External PCIe Power Supply Connector.


The NVIDIA BlueField-2 BF2500 DPU Controller features the second-generation BlueField-2 data processing unit (DPU) – an innovative and high-performance programmable networking engine. The DPU integrates an array of eight powerful 64-bit Armv8 A72 cores, interconnected by a coherent mesh, with a DDR4 memory controller and a dual-port Ethernet network controller. Providing unmatched scalability and efficiency, the NVIDIA BF2500 DPU Controller is the ideal adapter to accelerate the most demanding workloads in data center, cloud, service provider and storage environments.

Warning

The BlueField-2 BF2500 DPU Controller should be installed only in JBOF and JBOD systems, as it functions as a PCIe root complex (RC) initiating PCIe bus operations. Installing it in a regular host system may damage the card.

Ideal Solution for JBOF and JBOD Systems

NVIDIA BlueField-2 DPU is a highly integrated and efficient controller, optimized for NVMe storage systems, Network Functions Virtualization (NFV), Cloud and Machine Learning workloads. BlueField-2 integrates all the discrete components of a storage system appliance into a single chip, including Arm core CPUs, PCIe switch and a network controller, making it the premier solution for building Just-a-Bunch-Of-Flash (JBOF) systems, All-Flash-Array and storage appliances for NVMe over Fabrics. With an integrated NVMe-oF offload accelerator, the BF2500 DPU Controller has a superior performance advantage over existing JBOF systems, significantly reducing storage transaction latency, while increasing IOPs (I/O operations per second).

Features and Benefits

This section describes hardware features and capabilities.

Warning

It is recommended to upgrade your BlueField product to the latest software and firmware versions available in order to enjoy the latest features and bug fixes.

Please refer to the software release notes for feature availability.

PCI Express (PCIe)

Uses PCIe Gen 4.0 (16GT/s) through an x16 edge connector, compatible with Gen 3.0, 2.0 and 1.1.
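
After installation, the negotiated PCIe link speed and width can be verified from a Linux host through sysfs. The following is a minimal sketch only; the PCI address used is a placeholder and should be replaced with the address reported by lspci for the BlueField-2 device.

    # Read the negotiated PCIe link speed and width from sysfs.
    # The PCI address is a placeholder; find the real one with "lspci".
    from pathlib import Path

    PCI_ADDR = "0000:03:00.0"  # hypothetical slot address
    dev = Path("/sys/bus/pci/devices") / PCI_ADDR

    speed = (dev / "current_link_speed").read_text().strip()
    width = (dev / "current_link_width").read_text().strip()
    print(f"Link speed: {speed}, link width: x{width}")
    # A healthy Gen 4.0 x16 link should report 16 GT/s and a width of 16.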

Up to 200 Gigabit Ethernet

  • The adapters comply with the following IEEE 802.3 standards: 200Gb/s / 100Gb/s / 50Gb/s / 40Gb/s / 25Gb/s / 10Gb/s / 1Gb/s

  • IEEE 802.3bj, 802.3bm 100 Gigabit Ethernet

  • IEEE 802.3by, Ethernet Consortium 25/50 Gigabit Ethernet, supporting all FEC modes

  • IEEE 802.3ba 40 Gigabit Ethernet

  • IEEE 802.3by 25 Gigabit Ethernet

  • IEEE 802.3ae 10 Gigabit Ethernet

  • IEEE 802.3ap based auto-negotiation and KR startup

  • IEEE 802.3ad, 802.1AX Link Aggregation

  • IEEE 802.1Q, 802.1P VLAN tags and priority

  • IEEE 802.1Qau (QCN) - Congestion Notification

  • IEEE 802.1Qaz (ETS)

  • IEEE 802.1Qbb (PFC)

  • IEEE 802.1Qbg

  • IEEE 1588v2

  • Jumbo frame support (9.6KB)
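
To actually pass jumbo frames, the MTU of the host-facing network interface must also be raised. The sketch below uses the standard iproute2 tool from Python; the interface name p0 is a hypothetical placeholder for the DPU netdev name on your system.

    # Raise the MTU on the DPU network interface to enable jumbo frames.
    # "p0" is a hypothetical interface name; substitute the real netdev.
    import subprocess

    IFACE = "p0"
    MTU = 9000  # any value up to the supported 9.6KB jumbo frame size

    subprocess.run(["ip", "link", "set", "dev", IFACE, "mtu", str(MTU)], check=True)
    print(subprocess.run(["ip", "link", "show", IFACE],
                         capture_output=True, text=True, check=True).stdout)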

On-board Memory

  • Quad SPI NOR FLASH - includes 256Mbit for Firmware image.

  • UVPS EEPROM - includes 1Mbit.

  • FRU EEPROM - Stores the parameters and personality of the card. The EEPROM capacity is 128Kbit. The FRU I2C address is 0x50 and it is accessible through the PCIe SMBus (a read sketch follows this list).

  • eMMC - x8 NAND flash

  • DDR4 SDRAM - 16GB @ 3200MT/s single-channel DDR4 SDRAM memory, soldered down on-board. 64bit + 8bit ECC.
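
As an illustration of the FRU access path noted above, the sketch below reads the first bytes of the FRU EEPROM at I2C address 0x50 using the smbus2 Python package. The SMBus adapter number is an assumption that depends on how the PCIe SMBus segment is enumerated on the host (check with i2cdetect -l), and the example assumes the EEPROM responds to single-byte offset addressing.

    # Read the first bytes of the FRU EEPROM (I2C address 0x50) over SMBus.
    # The bus number is an assumption; identify the correct adapter with
    # "i2cdetect -l" before reading. Larger EEPROMs may need 2-byte offsets.
    from smbus2 import SMBus

    FRU_ADDR = 0x50
    BUS_NUM = 1  # hypothetical SMBus adapter number

    with SMBus(BUS_NUM) as bus:
        data = bus.read_i2c_block_data(FRU_ADDR, 0, 16)

    print(" ".join(f"{b:02x}" for b in data))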

BlueField-2 DPU

The BlueField-2 DPU integrates eight 64-bit Armv8 A72 cores interconnected by a coherent mesh network, one DRAM controller, an RDMA intelligent network adapter supporting up to 200Gb/s, an embedded PCIe switch with endpoint and root complex functionality, and up to 16 lanes of PCIe Gen 4.0.

Overlay Networks

In order to better scale their networks, data center operators often create overlay networks that carry traffic from individual virtual machines over logical tunnels in encapsulated formats such as NVGRE and VXLAN. While this solves network scalability issues, it hides the TCP packet from the hardware offloading engines, placing higher loads on the host CPU. DPU effectively addresses this by providing advanced NVGRE and VXLAN hardware offloading engines that encapsulate and de-capsulate the overlay protocol.
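
For reference, a VXLAN tunnel endpoint is created on the Linux side with standard iproute2 commands; the DPU then encapsulates and de-capsulates the overlay traffic in hardware. A minimal sketch with hypothetical names (vxlan100, p0) and example addresses:

    # Create a VXLAN interface on top of an uplink port so that overlay
    # traffic can be offloaded by the DPU. Names and addresses below are
    # illustrative placeholders only.
    import subprocess

    def sh(*args):
        subprocess.run(args, check=True)

    sh("ip", "link", "add", "vxlan100", "type", "vxlan",
       "id", "100",              # VXLAN network identifier (VNI)
       "dev", "p0",              # underlay uplink port (placeholder name)
       "dstport", "4789",        # IANA-assigned VXLAN UDP port
       "remote", "192.168.0.2")  # example remote tunnel endpoint
    sh("ip", "link", "set", "vxlan100", "up")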

RDMA and RDMA over Converged Ethernet (RoCE)

DPU, utilizing IBTA RDMA (Remote Direct Memory Access) and RoCE (RDMA over Converged Ethernet) technology, delivers low-latency and high-performance over Ethernet networks. Leveraging data center bridging (DCB) capabilities as well as advanced congestion control hardware mechanisms, RoCE provides efficient low-latency RDMA services over Layer 2 and Layer 3 networks.
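
On a host running the MLNX_OFED stack, the RDMA devices exposed by the DPU can be enumerated under /sys/class/infiniband (RoCE uses the same verbs interface as InfiniBand). A minimal sketch:

    # List RDMA-capable devices exposed by the adapter.
    import os

    RDMA_SYSFS = "/sys/class/infiniband"
    devices = sorted(os.listdir(RDMA_SYSFS)) if os.path.isdir(RDMA_SYSFS) else []
    if devices:
        for dev in devices:
            print("RDMA device:", dev)
    else:
        print("No RDMA devices found - check that MLNX_OFED is installed.")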

NVIDIA PeerDirect

NVIDIA PeerDirect communication provides high-efficiency RDMA access by eliminating unnecessary internal data copies between components on the PCIe bus (for example, from GPU to CPU), and therefore significantly reduces application run time. DPU advanced acceleration technology enables higher cluster efficiency and scalability to tens of thousands of nodes.

Quality of Service (QoS)

Support for port-based Quality of Service, meeting various application requirements for latency and SLA.

Storage Acceleration

A consolidated compute and storage network achieves significant cost-performance advantages over multi-fabric networks. Standard block and file access protocols can leverage RDMA for high-performance storage access.

  • NVMe over Fabric offloads for the target machine

  • T10-DIF Signature Handover

The BlueField-2 DPU may operate as a co-processor offloading specific storage tasks from the host, isolating part of the storage media from the host, or enabling abstraction of software-defined storage logic using the BlueField-2 Arm cores. On the storage initiator side, the BlueField-2 DPU can provide an efficient solution for hyper-converged systems, enabling the host CPU to focus on compute while the entire storage interface is handled through the Arm cores.

NVMe-oF

Nonvolatile Memory Express (NVMe) over Fabrics is a protocol for communicating block storage IO requests over RDMA to transfer data between a host computer and a target solid-state storage device or system over a network. BlueField-2 DPU may operate as a co-processor offloading specific storage tasks from the host using its powerful NVMe over Fabrics Offload accelerator.
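
To illustrate the initiator side of the protocol, a host can attach to an NVMe-oF target over RDMA with the standard nvme-cli tool. The sketch below is illustrative only; the target address, port and subsystem NQN are placeholders.

    # Connect to an NVMe over Fabrics target over RDMA using nvme-cli.
    # The address, service port and subsystem NQN are placeholders.
    import subprocess

    TARGET_ADDR = "192.168.10.10"                   # example target IP
    TARGET_PORT = "4420"                            # conventional NVMe-oF port
    SUBSYS_NQN = "nqn.2023-01.io.example:subsys1"   # hypothetical NQN

    subprocess.run(["nvme", "connect",
                    "-t", "rdma",        # transport type
                    "-a", TARGET_ADDR,   # target address
                    "-s", TARGET_PORT,   # transport service id (port)
                    "-n", SUBSYS_NQN],   # subsystem NQN
                   check=True)
    # The attached namespaces can then be listed with "nvme list".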

SR-IOV

DPU SR-IOV technology provides dedicated adapter resources and guaranteed isolation and protection for virtual machines (VM) within the server.
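
Virtual Functions are typically enabled from the host through the kernel's standard SR-IOV sysfs interface, once SR-IOV has been enabled in the device firmware configuration. A minimal sketch, assuming a hypothetical interface name p0:

    # Enable SR-IOV Virtual Functions through the standard sysfs interface.
    # "p0" is a hypothetical interface name; SR-IOV must already be enabled
    # in the device/firmware configuration, and root privileges are required.
    from pathlib import Path

    IFACE = "p0"
    NUM_VFS = 4

    device_dir = Path(f"/sys/class/net/{IFACE}/device")
    print("Maximum VFs supported:",
          (device_dir / "sriov_totalvfs").read_text().strip())
    (device_dir / "sriov_numvfs").write_text(str(NUM_VFS))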

GPU Direct

The latest advancement in GPU-GPU communications is GPUDirect RDMA. This new technology provides a direct P2P (peer-to-peer) data path between GPU memory and the HCA devices. This provides a significant decrease in GPU-GPU communication latency and completely offloads the CPU, removing it from all GPU-GPU communications across the network. DPU uses high-speed DMA transfers to copy data between P2P devices, resulting in more efficient system applications.

Crypto

The crypto-enabled versions of the BlueField-2 DPU include a BlueField-2 IC that supports accelerated cryptographic operations. In addition to specialized instructions for bulk cryptographic processing in the Arm cores, an offload hardware engine accelerates public-key cryptography and random number generation.

Security Accelerators

A consolidated compute and network solution based on the DPU achieves significant advantages over a centralized security server solution. Standard encryption protocols and security applications can leverage BlueField-2 compute capabilities and network offloads for security application solutions such as a Layer 4 stateful firewall.

Out-of-Band Management

The BlueField-2 DPU incorporates a 1GbE RJ45 out-of-band port that allows the network operator to establish trust boundaries in accessing the management function to apply it to network resources. It can also be used to ensure management connectivity (including the ability to determine the status of any network component) independent of the status of other in-band network components.

© Copyright 2023, NVIDIA. Last updated on Sep 9, 2023.