This is the User Manual for the BlueField®-2 BF2500 DPU Controller. This document describes the card's interfaces and specifications, the software and firmware required to operate the board, and a step-by-step procedure for bringing up the BlueField-2 BF2500 DPU Controller.
System Requirements Overview
|Motherboard||PCI Express 3.0/4.0-compliant motherboard with x16 interface|
|System Power Supply|
|Operating System||The BlueField-2 DPU Controller can be shipped with Ubuntu – a commercial Linux operating system – which includes the OFED stack and is capable of running all customer-based Linux applications seamlessly. The cards also support CentOS and have an out-of-band 1GbE management interface. For more information, please refer to the BlueField-2 Software User Manual.|
Before installing your new system, unpack it and check against the tables below that all parts have been included. Check the parts for visible damage that may have occurred during shipping.
If anything is damaged or missing, contact your sales representative at support@.com.
|Cards||1x BlueField-2 DPU|
|Accessories||1x tall bracket (shipped assembled on the card)|
The accessories kit should be ordered separately. OPN: MBF20-DKIT.
|Cables||1x USB 2.0 Type A to Mini USB Type B cable|
|1x USB 2.0 Type A to 30-pin Flat Socket|
For FHHL 100Gb/s P-Series DPUs, you need a 6-pin PCIe external power cable to activate the card. The cable is not included in the package. For further details, please refer to External PCIe Power Supply Connector.
The BlueField-2 BF2500 DPU Controller features the second-generation BlueField-2 I/O Processing Unit (IPU) – an innovative, high-performance programmable networking engine. The IPU integrates an array of eight powerful 64-bit Armv8 A72 cores, interconnected by a coherent mesh, with a DDR4 memory controller and a dual-port Ethernet network controller. Providing unmatched scalability and efficiency, the BF2500 DPU Controller is the ideal adapter to accelerate the most demanding workloads in data center, cloud, service provider, and storage environments.
The BlueField BF2500 DPU Controller should be installed only in JBOF and JBOD systems, as it functions as a PCIe root complex (RC) initiating PCIe bus operations. Installing it in a regular host system may damage the card.
Ideal Solution for JBOF and JBOD Systems
BlueField-2 I/O Processing Unit (IPU) is a highly integrated and efficient controller, optimized for NVMe storage systems, Network Functions Virtualization (NFV), Cloud, and Machine Learning workloads. BlueField-2 integrates all the discrete components of a storage system appliance into a single chip, including Arm core CPUs, PCIe switch, and a network controller, making it the premier solution for building Just-a-Bunch-Of-Flash (JBOF) systems, All-Flash-Array, and storage appliances for NVMe over Fabrics. With an integrated NVMe-oF offload accelerator, the BF2500 DPU Controller has a superior performance advantage over existing JBOF systems, significantly reducing storage transaction latency, while increasing IOPs (I/O operations per second).
BlueField-2 DPU Controllers
|Network Connector Type|
|Form Factor||PCIe Full Height, Half Length|
Dimensions: 168mm x 111mm (6.61in x 4.37in)
|Heatsink Height||Single Slot|
Height: 14.91mm (from PCB surface)
PCIe Gen 3.0 / 4.0 SERDES @ 8.0GT/s / 16.0GT/s
PCIe Gen 4.0 SERDES @ 16.0GT/s
|On-board DDR4 Memory|
Single-channel with 8x 8-bit DDR4 + ECC (64-bit data + 8-bit ECC)
Single-channel with 8x 8-bit DDR4 + ECC (64-bit data + 8-bit ECC)
|1GbE OOB Management||√||√||√||√|
BlueField-2 P-Series 8 cores (high-bin)
a. Note: Refer to BlueField-2 Software and Firmware release notes for the availability of PCIe Gen 4.0 capabilities.
For more detailed information see Specifications.
Features and Benefits
This section describes hardware features and capabilities. Please refer to the software release notes for feature availability.
It is recommended to upgrade your BlueField product to the latest software and firmware versions available in order to enjoy the latest features and bug fixes.
|PCI Express (PCIe)a|
PCIe Gen 3.0 (8GT/s) and Gen 4.0 (16GT/s) through an x16 edge connector. Gen 1.1 and 2.0 compatible
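As a back-of-the-envelope illustration of these link rates (an estimate only, not a figure from this manual; real throughput is lower due to protocol overhead), the raw per-direction bandwidth of an x16 link follows from the line rate and the 128b/130b encoding used by Gen 3.0 and Gen 4.0:

```python
# Rough per-direction PCIe bandwidth for an x16 link.
# Gen 3.0 and Gen 4.0 both use 128b/130b encoding on the wire.
def pcie_raw_gbps(line_rate_gt_s, lanes=16, enc=(128, 130)):
    """Raw payload bandwidth in Gb/s, before protocol overhead."""
    num, den = enc
    return line_rate_gt_s * lanes * num / den

gen3 = pcie_raw_gbps(8.0)    # Gen 3.0 @ 8.0 GT/s
gen4 = pcie_raw_gbps(16.0)   # Gen 4.0 @ 16.0 GT/s
print(f"Gen3 x16 = {gen3:.0f} Gb/s, Gen4 x16 = {gen4:.0f} Gb/s")  # 126 and 252 Gb/s
```

This is why an x16 Gen 4.0 slot is needed to feed the card's 200Gb/s network ports without a PCIe bottleneck.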
|200Gb/s DPU Controller|
BlueField-2 offers the highest-throughput DPU Controller, supporting HDR 200Gb/s InfiniBand and 200Gb/s Ethernet and enabling any standard networking, clustering, or storage to operate seamlessly over any converged network leveraging a consolidated software stack.
|InfiniBand Architecture Specification v1.3 compliant|
BlueField-2 DPU Controller delivers low latency, high bandwidth, and computing efficiency for performance-driven server and storage clustering applications. BlueField-2 DPU Controller is InfiniBand Architecture Specification v1.3 compliant.
|Up to 200 Gigabit Ethernet||DPU Controllers comply with the following IEEE 802.3 standards:|
200GbE / 100GbE / 50GbE
- IEEE 802.3bj, 802.3bm 100 Gigabit Ethernet
- IEEE 802.3by, Ethernet Consortium 25, 50 Gigabit Ethernet, supporting all FEC modes
- IEEE 802.3ba 40 Gigabit Ethernet
- IEEE 802.3by 25 Gigabit Ethernet
- IEEE 802.3ae 10 Gigabit Ethernet
- IEEE 802.3ap based auto-negotiation and KR startup
- IEEE 802.3ad, 802.1AX Link Aggregation
- IEEE 802.1Q, 802.1P VLAN tags and priority
- IEEE 802.1Qau (QCN) – Congestion Notification
- IEEE 802.1Qaz (ETS)
- IEEE 802.1Qbb (PFC)
- IEEE 802.1Qbg
- IEEE 1588v2
- Jumbo frame support (9.6KB)
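To see why the jumbo-frame support listed above matters, the sketch below (my own illustration, using standard Ethernet per-frame overhead figures and a common 9000-byte jumbo MTU rather than the card's 9.6KB maximum) compares wire efficiency at different payload sizes:

```python
# Each Ethernet frame carries fixed overhead on the wire:
# preamble+SFD (8) + header (14) + FCS (4) + inter-frame gap (12) = 38 bytes.
PER_FRAME_OVERHEAD = 8 + 14 + 4 + 12

def wire_efficiency(payload_bytes):
    """Fraction of wire time spent on payload rather than framing."""
    return payload_bytes / (payload_bytes + PER_FRAME_OVERHEAD)

print(f"1500-byte MTU: {wire_efficiency(1500):.2%}")   # ~97.5%
print(f"9000-byte MTU: {wire_efficiency(9000):.2%}")   # ~99.6%
```

Larger frames also reduce the per-packet processing rate the host and NIC must sustain at 200Gb/s.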
• DDR4 16GB on-board, 3200MT/s
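For context, the peak theoretical bandwidth of this memory configuration can be estimated as follows (a simple sketch assuming the 64-bit data bus described above; the ECC bits carry no payload):

```python
def ddr4_peak_gbytes(mt_per_s, data_bits=64):
    """Peak DDR4 bandwidth: transfers/s x bytes per transfer (ECC excluded)."""
    return mt_per_s * 1e6 * (data_bits // 8) / 1e9

print(ddr4_peak_gbytes(3200))  # 25.6 (GB/s)
```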
|BlueField-2 IPU||The BlueField-2 IPU integrates eight 64-bit Armv8 A72 cores interconnected by a coherent mesh network, one DRAM controller, an RDMA intelligent network adapter supporting up to 200Gb/s, an embedded PCIe switch with endpoint and root complex functionality, and up to 16 lanes of PCIe Gen 3.0/4.0.|
|Overlay Networks||In order to better scale their networks, data center operators often create overlay networks that carry traffic from individual virtual machines over logical tunnels in encapsulated formats such as NVGRE and VXLAN. While this solves network scalability issues, it hides the TCP packet from the hardware offloading engines, placing higher loads on the host CPU. BlueField BF2500 DPU Controller effectively addresses this by providing advanced NVGRE and VXLAN hardware offloading engines that encapsulate and de-capsulate the overlay protocol.|
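The encapsulation described above has a concrete MTU cost. As a sketch (assuming an IPv4 underlay with no VLAN tags; the header sizes come from the VXLAN and IP specifications, not from this manual), the underlay MTU needed to carry tunneled frames without fragmentation is:

```python
# Bytes added around the inner Ethernet frame by VXLAN encapsulation:
INNER_ETH = 14   # inner Ethernet header carried inside the tunnel
VXLAN_HDR = 8
OUTER_UDP = 8
OUTER_IP4 = 20   # outer IPv4 header (40 for IPv6)

def underlay_mtu(inner_mtu):
    """Minimum underlay IP MTU for a given VM-facing (inner) MTU."""
    return inner_mtu + INNER_ETH + VXLAN_HDR + OUTER_UDP + OUTER_IP4

print(underlay_mtu(1500))  # 1550
```

In practice, operators either raise the underlay MTU by 50 bytes or lower the VM MTU accordingly; the hardware offload removes the encapsulation cost from the host CPU but not the MTU arithmetic.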
|RDMA and RDMA over Converged Ethernet (RoCE)||The BF2500 card, utilizing RDMA (Remote Direct Memory Access) and RoCE (RDMA over Converged Ethernet) technology, delivers low-latency and high-performance over Ethernet networks. Leveraging data center bridging (DCB) capabilities as well as advanced congestion control hardware mechanisms, RoCE provides efficient low-latency RDMA services over Layer 2 and Layer 3 networks.|
|PeerDirect®||PeerDirect communication provides high-efficiency RDMA access by eliminating unnecessary internal data copies between components on the PCIe bus (for example, from GPU to CPU), and therefore significantly reduces application run time. BlueField BF2500 DPU Controller advanced acceleration technology enables higher cluster efficiency and scalability to tens of thousands of nodes.|
|Quality of Service (QoS)||Support for port-based Quality of Service enabling various application requirements for latency and SLA.|
A consolidated compute and storage network achieves significant cost-performance advantages over multi-fabric networks. Standard block and file access protocols can leverage RDMA for high-performance storage access.
• NVMe over Fabric offloads for the target machine
The BlueField-2 SmartNIC may operate as a co-processor offloading specific storage tasks from the host, isolating part of the storage media from the host, or enabling abstraction of software-defined storage logic.
|NVMe-oF||Nonvolatile Memory Express (NVMe) over Fabrics is a protocol for communicating block storage IO requests over RDMA to transfer data between a host computer and a target solid-state storage device or system over a network. BlueField-2 DPU Controller may operate as a co-processor offloading specific storage tasks from the host using its powerful NVMe over Fabrics Offload accelerator.|
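On the initiator side, hosts typically reach such a target with standard NVMe-oF tooling. As a hypothetical illustration only (the transport address and port below are placeholders, not values from this manual), an nvme-cli discovery configuration entry might look like:

```
# /etc/nvme/discovery.conf — one discovery controller per line
# (placeholder address; replace with your target's RDMA address)
--transport=rdma --traddr=192.168.0.10 --trsvcid=4420
```

With such an entry in place, `nvme connect-all` can discover and connect to the subsystems the target reports.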
|SR-IOV||DPU Controller SR-IOV technology provides dedicated adapter resources and guaranteed isolation and protection for virtual machines (VM) within the server.|
The latest advancement in GPU-GPU communications is GPUDirect RDMA. This technology provides a direct P2P (Peer-to-Peer) data path between GPU memory and HCA devices, significantly decreasing GPU-GPU communication latency and completely offloading the CPU, removing it from all GPU-GPU communications across the network. The BlueField BF2500 DPU Controller uses high-speed DMA transfers to copy data between P2P devices, resulting in more efficient system applications.
|Crypto||The BlueField-2 DPU Controller crypto-enabled versions include a BlueField-2 IC which supports accelerated cryptographic operations. In addition to specialized instructions for bulk cryptographic processing in the Arm cores, a hardware offload engine accelerates public-key cryptography and random number generation.|
A consolidated compute and network solution based on the BlueField DPU Controller achieves significant advantages over a centralized security server solution. Standard encryption protocols and security applications can leverage BlueField-2 compute capabilities and network offloads for security application solutions.
|Out-of-Band Management||The BlueField-2 DPU Controller incorporates a 1GbE RJ45 out-of-band port that allows the network operator to establish trust boundaries in accessing the management function to apply it to network resources. It can also be used to ensure management connectivity (including the ability to determine the status of any network component) independent of the status of other in-band network components.|
a. Refer to BlueField Software and Firmware release notes for the availability of PCIe Gen 4.0 capabilities
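Since the out-of-band port behaves as a standard 1GbE Ethernet interface on the Arm subsystem, it can be configured like any other NIC. The fragment below is a hypothetical netplan sketch only – the interface name `oob_net0` and the DHCP addressing are assumptions; check your BlueField software documentation for the actual interface name and recommended configuration:

```yaml
# Example netplan stanza for the 1GbE out-of-band management port.
network:
  version: 2
  ethernets:
    oob_net0:        # placeholder interface name
      dhcp4: true
```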