DOCA Platform Framework (DPF) Documentation v25.4

DPF System Prerequisites

DPF makes a number of assumptions about the hardware, software and networking of the machines it runs on. Some of the specific user guides add their own requirements.

A high-availability set of control plane machines serves many worker nodes in a cluster running DPF.

Control plane machines

Each control plane machine:

  • May be virtualized

  • x86_64 architecture

  • 16 GB RAM

  • 8 CPUs

  • DPUs are not installed

Worker machines

Each worker machine:

  • Bare metal - no virtualization

  • x86_64 architecture

  • 16 GB RAM

  • 8 CPUs

  • Exactly one DPU
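A machine can be checked against these minimums with a short script. This is a sketch, not DPF tooling; the thresholds simply restate the figures above.

```shell
# Sketch: check a host against the minimum hardware figures above.
arch=$(uname -m)
[ "$arch" = "x86_64" ] || echo "WARN: architecture is $arch, expected x86_64"

cpus=$(nproc)
[ "$cpus" -ge 8 ] || echo "WARN: $cpus CPUs, at least 8 required"

# MemTotal in /proc/meminfo is reported in kB; 16 GB is roughly 16,000,000 kB.
mem_kb=$(awk '/MemTotal/ {print $2}' /proc/meminfo)
[ "$mem_kb" -ge 16000000 ] || echo "WARN: less than 16 GB RAM"
```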

DPUs

  • BlueField-3

  • 32 GB memory

  • Flashed with NVIDIA BFB with DOCA version 2.5 or higher

  • The out-of-band management port is not used

Control plane machines

  • NFS client packages installed - e.g. nfs-common

  • NFS server available with /mnt/dpf_share readable and writable by any user
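On the NFS server side, an export like the following makes /mnt/dpf_share readable and writable by any user. This is a sketch assuming a Linux server with nfs-kernel-server installed; the client network range is a placeholder.

```shell
# Sketch: export /mnt/dpf_share world-readable/writable (run as root on the NFS server).
# The 10.0.0.0/24 range is a placeholder for your management network.
mkdir -p /mnt/dpf_share
chmod 0777 /mnt/dpf_share
echo '/mnt/dpf_share 10.0.0.0/24(rw,sync,no_subtree_check,no_root_squash)' >> /etc/exports
exportfs -ra
```

Mode 0777 together with no_root_squash is what satisfies the "readable and writable by any user" requirement above.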

Worker machines

  • In-Band Manageability Interface enabled in BIOS

  • NFS client packages installed - e.g. nfs-common

  • NFS server available with /mnt/dpf_share readable and writable by any user

  • rshim package is not installed
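The two package requirements can be spot-checked on a worker. This sketch assumes a Debian/Ubuntu host where nfs-common provides mount.nfs and the rshim package provides an rshim binary.

```shell
# Sketch: verify worker software prerequisites from this section.
if command -v mount.nfs >/dev/null 2>&1; then
  echo "NFS client: present"
else
  echo "NFS client: MISSING - install nfs-common"
fi

# DPF manages rshim itself, so the host package must be absent.
if command -v rshim >/dev/null 2>&1; then
  echo "rshim: present - remove the rshim package before installing DPF"
else
  echo "rshim: absent (ok)"
fi
```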

Kubernetes

  • Kubernetes 1.30

  • Control plane nodes have the label "node-role.kubernetes.io/control-plane": ""

  • All nodes have full internet access - both from the host out-of-band and DPU high-speed interfaces.

  • A virtual IP from the management subnet is reserved for internal DPF usage.

  • The out-of-band management and high-speed networks are routable to each other.

  • The control plane nodes hosting the DPU control plane pods must be located on the same L2 broadcast domain.

  • The out-of-band management fabric on which control plane nodes are connected should allow multicast traffic (used for VRRP).
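If the installer has not already set it, the control-plane label can be applied with kubectl. The node name below is a placeholder.

```shell
# Sketch: apply the required control-plane label (node name is a placeholder).
kubectl label node control-plane-0 node-role.kubernetes.io/control-plane=""

# Verify which nodes carry the label.
kubectl get nodes -l node-role.kubernetes.io/control-plane
```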

© Copyright 2025, NVIDIA. Last updated on May 20, 2025.