NVIDIA BlueField-3 Networking Platform User Guide

PCIe Bifurcation Configuration Options

Note

PCIe bifurcation is supported starting from DOCA 2.5 with BlueField BSP 4.5.0 (released December 2023).

Note

This section applies to the following OPNs:

B3220 DPUs: 900-9D3B6-00CV-AA0 and 900-9D3B6-00SV-AA0

B3240 DPUs: 900-9D3B6-00CN-AB0 and 900-9D3B6-00SN-AB0

B3210 DPUs: 900-9D3B6-00CC-AA0 and 900-9D3B6-00SC-AA0

B3210E DPUs: 900-9D3B6-00CC-EA0 and 900-9D3B6-00SC-EA0

NVIDIA BlueField-3 DPUs support a range of configuration scenarios to meet the demands of different environments and deployments. This section describes the connectivity options for peripherals on the PCIe interface, including scenarios where the BlueField-3 DPU acts as the PCIe switch with NVMe SSDs as PCIe endpoints. While this list of scenarios is not exhaustive, it highlights the tested and verified options. Customers who need to support unlisted configurations should contact NVIDIA Support.

The BlueField-3 DPU exposes two x16 PCIe interfaces built around an internal PCIe switch. The first interface is exposed via the x16 PCIe Gen 5.0/4.0 Goldfinger connector and serves as an endpoint to the host server by default. The second x16 PCIe interface is exposed through the Cabline CA-II Plus connector and features programmable bifurcation as a downstream port. The following figure shows the BlueField-3 DPU block diagram with the PCIe interfaces.

BlueField-3 DPU Block Diagram with PCIe Interfaces


The various configuration scenarios listed in this section include a diagram and instructions on how to bifurcate the PCIe interface using the mlxconfig tool. For more information on the mlxconfig tool, please refer to mlxconfig – Changing Device Configuration Tool.
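
Before applying any scenario, it can be useful to confirm which device mlxconfig should target and to inspect the current values of the PCIe bifurcation parameters. The following is a minimal sketch using standard MFT commands; the grep pattern is illustrative, and <device> stands for the MST device (or PCIe address) reported by mst status.

# Start the MST service and list the available BlueField devices
mst start
mst status

# Inspect the PCIe switch/bus configuration parameters
mlxconfig -d <device> q | grep -E 'PCI_(SWITCH0|BUS)'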

Warning

Before setting the desired configuration, take note of the following warnings:

  • Any customer-set configuration overrides the previous configuration values.

  • Misconfiguration may harm the system.

  • It is recommended to establish out-of-band connectivity to the BlueField DPU Arm OS before setting any of these configurations for the first time. This enables you to reset the NVConfig parameters to their default values in case of misconfiguration, as shown in the example after this list.
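
If a misconfiguration does occur, the NVConfig parameters can be restored to their factory defaults from the Arm OS (or from the host, if it is still reachable). The sketch below uses standard mlxconfig/mlxfwreset commands; whether a firmware reset is sufficient or a full host power cycle is needed can depend on the parameters involved, so verify the reset step against your BSP documentation.

# Restore all NVConfig parameters to their factory default values
mlxconfig -d <device> reset

# Apply the change; for PCIe bifurcation parameters a full host power cycle may be required instead
mlxfwreset -d <device> reset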

The following table summarizes the available configuration scenarios.

| Configuration | Root Port for Downstream Port (DSP) Devices (DPU Arm / Host) | DPU PCIe Goldfingers Bifurcation | DPU PCIe Auxiliary Connection Bifurcation |
|---|---|---|---|
| Default | N/A | 1 x Gen 5.0/4.0 x16 PCIe lanes as upstream port | 1 x Gen 5.0/4.0 x16 PCIe lanes as upstream port |
| Host as Root Port on Peripherals | Host | 1 x Gen 5.0/4.0 x16 PCIe lanes as upstream port | 4 x Gen 5.0/4.0 x4 PCIe lanes as downstream ports |
| DPU Arms as Root Port on Peripherals | DPU Arms | 1 x Gen 5.0/4.0 x16 PCIe lanes as upstream port | 4 x Gen 5.0/4.0 x4 PCIe lanes as downstream ports |

Host as Root Port on Peripherals

In this scenario, the x16 PCIe Goldfingers of the BlueField-3 DPU serve as an endpoint to the host server (default), while the additional x16 PCIe lanes are accessible via the Cabline CA-II Plus connector, bifurcated into four PCIe links of x4 lanes each.

In this configuration, the host server assumes the role of the Root Port for the downstream devices connected to the Cabline CA-II Plus connector. These downstream devices are exposed to the host server's PCIe bus via the internal BlueField-3 DPU PCIe switch.

As shown in the figure below, the host functions as the Root Port, branching into four PCIe links on the Cabline CA-II Plus connector, with each link bifurcated to x4 PCIe lanes.

Host as Root Port on Peripherals Configuration Diagram

Note

Important Notes:

  • mlxconfig settings can be applied either from the host (in both NIC Mode and DPU Mode) or directly from the OS running on the DPU Arm cores.

  • This configuration persists across resets and NIC firmware updates.

The required set of configurations to implement this bifurcation is outlined below:


# Internal PCIe switch upstream port selection
mlxconfig -d <device> s PCI_SWITCH0_UPSTREAM_PORT_BUS=0
mlxconfig -d <device> s PCI_SWITCH0_UPSTREAM_PORT_PEX=0

# BUS00: x16 Goldfinger interface toward the host
mlxconfig -d <device> s PCI_BUS00_HIERARCHY_TYPE=1
mlxconfig -d <device> s PCI_BUS00_WIDTH=5
mlxconfig -d <device> s PCI_BUS00_SPEED=4

# BUS10/12/14/16: four x4 downstream ports on the Cabline CA-II Plus connector, rooted at the host
mlxconfig -d <device> s PCI_BUS10_HIERARCHY_TYPE=1
mlxconfig -d <device> s PCI_BUS10_WIDTH=3
mlxconfig -d <device> s PCI_BUS10_SPEED=4
mlxconfig -d <device> s PCI_BUS12_HIERARCHY_TYPE=1
mlxconfig -d <device> s PCI_BUS12_WIDTH=3
mlxconfig -d <device> s PCI_BUS12_SPEED=4
mlxconfig -d <device> s PCI_BUS14_HIERARCHY_TYPE=1
mlxconfig -d <device> s PCI_BUS14_WIDTH=3
mlxconfig -d <device> s PCI_BUS14_SPEED=4
mlxconfig -d <device> s PCI_BUS16_HIERARCHY_TYPE=1
mlxconfig -d <device> s PCI_BUS16_WIDTH=3
mlxconfig -d <device> s PCI_BUS16_SPEED=4
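
After applying these settings, the new values take effect on the next firmware reset or host power cycle (for PCIe topology changes, a full power cycle is typically required). A minimal way to sanity-check the result is sketched below; the grep pattern is illustrative.

# Confirm the values staged for the next boot
mlxconfig -d <device> q | grep -E 'PCI_(SWITCH0|BUS)'

# After the power cycle, run on the host: the downstream devices (e.g., NVMe SSDs)
# should appear in the host's PCIe tree behind the DPU's internal PCIe switch
lspci -tv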

DPU Arms as Root Port on Peripherals

In this scenario, the x16 PCIe Goldfingers of the BlueField-3 DPU serve as an endpoint to the host server (default), while the additional x16 PCIe lanes are accessible via the Cabline CA-II Plus connector, bifurcated into four PCIe links of x4 lanes each.

In this configuration, the DPU's Arm cores function as the Root Port for the downstream devices connected to the Cabline CA-II Plus connector, and these devices are not exposed to the host server's PCIe bus.

As shown in the figure below, the DPU Arm cores operate as the Root Port, with bifurcation into four PCIe links on the Cabline CA-II Plus connector, where each link comprises x4 PCIe lanes.

DPU Arms as Root Port on Peripherals Configuration Diagram

Note

Important Notes:

  • mlxconfig settings can be applied either from the host (in both NIC Mode and DPU Mode) or directly from the OS running on the DPU Arm cores.

  • This configuration persists across resets and NIC firmware updates.

The required set of configurations to implement this bifurcation is outlined below:


# BUS00: x16 Goldfinger interface toward the host
mlxconfig -d <device> s PCI_BUS00_HIERARCHY_TYPE=0
mlxconfig -d <device> s PCI_BUS00_WIDTH=5
mlxconfig -d <device> s PCI_BUS00_SPEED=4

# BUS10/12/14/16: four x4 downstream ports on the Cabline CA-II Plus connector, rooted at the DPU Arm cores
mlxconfig -d <device> s PCI_BUS10_HIERARCHY_TYPE=2
mlxconfig -d <device> s PCI_BUS10_WIDTH=3
mlxconfig -d <device> s PCI_BUS10_SPEED=4
mlxconfig -d <device> s PCI_BUS12_HIERARCHY_TYPE=2
mlxconfig -d <device> s PCI_BUS12_WIDTH=3
mlxconfig -d <device> s PCI_BUS12_SPEED=4
mlxconfig -d <device> s PCI_BUS14_HIERARCHY_TYPE=2
mlxconfig -d <device> s PCI_BUS14_WIDTH=3
mlxconfig -d <device> s PCI_BUS14_SPEED=4
mlxconfig -d <device> s PCI_BUS16_HIERARCHY_TYPE=2
mlxconfig -d <device> s PCI_BUS16_WIDTH=3
mlxconfig -d <device> s PCI_BUS16_SPEED=4
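
Once the new configuration has taken effect (again, after the required reset or power cycle), the downstream devices are expected to be visible only from the DPU Arm OS, not from the host. A minimal check is sketched below, assuming lspci is available on the Arm OS; the NVMe grep is only an example of what to look for.

# Run on the DPU Arm OS: the four x4 downstream links and their devices
# (e.g., NVMe SSDs) should be enumerated behind the internal PCIe switch
lspci -tv

# Run on the host: the same devices should not appear, since the Arm cores own the Root Port
lspci -tv | grep -i nvme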
