DPF Performance Optimizations

HBN-OVN Performance Optimizations

This document describes recommended performance optimization steps for the HBN-OVN DPF deployment:

OVN-K8s is deployed as the primary Kubernetes network, and it utilizes both the Management Network and the Highspeed Network. Therefore, to use a 9K MTU on the Highspeed Network and achieve maximum performance, you must also set a 9K MTU on the Management Network. Make sure to do so on the ports of your Management Switch and on all the Management Ports of the nodes connected to the Management Network (e.g., the Firewall/Router node, the master nodes, the node serving the BFB image, etc.).
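
As a minimal sketch, the MTU of a management port can be checked and raised at runtime as shown below; the interface name eno1 is an assumption, so adjust it to your environment and make the change persistent in your network configuration (e.g., netplan or NetworkManager).

# Check the current MTU of the management port (interface name is an example)
ip link show eno1 | grep mtu

# Raise the MTU for the running system (non-persistent)
sudo ip link set dev eno1 mtu 9000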

The following kernel parameters can be added to the worker nodes to make sure that the cores that belong to the same NUMA node as the DPU are isolated and reserved for workload pods. Please substitute the values of isolcpus, nohz_full, and rcu_nocbs with the CPU cores of the NUMA node to which the BlueField-3 is connected:

intel_iommu=on iommu=pt numa_balancing=disable processor.max_cstate=0 isolcpus=28-55,84-111 nohz_full=28-55,84-111 rcu_nocbs=28-55,84-111
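
As a sketch of how these values can be determined and applied, assuming an Ubuntu worker node and that the BlueField-3 uplink appears as ens1f0np0 (adjust the netdev name and NUMA node number for your system):

# Find the NUMA node the BlueField-3 is attached to (netdev name is an example)
cat /sys/class/net/ens1f0np0/device/numa_node

# List the CPU cores that belong to that NUMA node (here, node 1)
lscpu | grep "NUMA node1 CPU(s)"

# Append the parameters to GRUB_CMDLINE_LINUX in /etc/default/grub, then
# regenerate the GRUB configuration and reboot for the change to take effect
sudo update-grub
sudo reboot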

The following parameters can be set in the kubelet configuration file (typically at /var/lib/kubelet/config.yaml) to use the "single-numa-node" Topology Manager policy, in which Kubernetes assigns workload pods CPU cores only from the NUMA node that the DPU is connected to.

Add the following lines to the file:

cpuManagerPolicy: static
topologyManagerPolicy: single-numa-node
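
After editing the file, the kubelet must be restarted for the new policies to take effect. Note that when the CPU manager policy changes, the kubelet typically refuses to start with the old CPU manager state file, so it is usually removed first. A sketch, assuming a systemd-managed kubelet and a node named worker-1:

# Drain the node before changing the CPU manager policy (node name is an example)
kubectl drain worker-1 --ignore-daemonsets

# Remove the stale CPU manager checkpoint and restart the kubelet
sudo rm -f /var/lib/kubelet/cpu_manager_state
sudo systemctl restart kubelet

# Return the node to service
kubectl uncordon worker-1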

Add the following line to the OVN-K8s CNI helm values YAML (manifests/01-cni-installation/helm-values/ovn-kubernetes.yml). The value 8940 leaves headroom below the 9000-byte MTU of the Highspeed Network for the Geneve encapsulation overhead added by OVN-Kubernetes:

mtu: 8940

For example:

commonManifests:
  enabled: true
nodeWithoutDPUManifests:
  enabled: true
controlPlaneManifests:
  enabled: true
nodeWithDPUManifests:
  enabled: true
nodeMgmtPortNetdev: $DPU_P0_VF1
dpuServiceAccountNamespace: dpf-operator-system
gatewayOpts: --gateway-interface=$DPU_P0
## Note this CIDR is followed by a trailing /24 which informs OVN Kubernetes on how to split the CIDR per node.
podNetwork: $POD_CIDR/24
serviceNetwork: $SERVICE_CIDR
k8sAPIServer: https://$TARGETCLUSTER_API_SERVER_HOST:$TARGETCLUSTER_API_SERVER_PORT
mtu: 8940
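
Once OVN-Kubernetes is redeployed with these values, the MTU can be spot-checked from inside any pod running on a worker node; the namespace and pod name below are placeholders:

# The pod's primary interface should report mtu 8940
kubectl -n <namespace> exec <pod-name> -- ip link show eth0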

Add the controlPlaneMTU and the highSpeedMTU parameters to the networking section of the DPFOperatorConfig YAML, for example:

---
apiVersion: operator.dpu.nvidia.com/v1alpha1
kind: DPFOperatorConfig
metadata:
  name: dpfoperatorconfig
  namespace: dpf-operator-system
spec:
  overrides:
    kubernetesAPIServerVIP: $TARGETCLUSTER_API_SERVER_HOST
    kubernetesAPIServerPort: $TARGETCLUSTER_API_SERVER_PORT
  provisioningController:
    bfbPVCName: "bfb-pvc"
    dmsTimeout: 900
  kamajiClusterManager:
    disable: false
  networking:
    controlPlaneMTU: 9000
    highSpeedMTU: 9000
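
A possible way to apply and verify the change (the file name is an example; the resource name matches the example above):

# Apply the updated configuration
kubectl apply -f dpfoperatorconfig.yaml

# Confirm the MTU settings were accepted
kubectl -n dpf-operator-system get dpfoperatorconfig dpfoperatorconfig -o yaml | grep -i mtu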

Please add the following lines to the OVS commands script section in the DPUFlavor YAML:

_ovs-vsctl set Interface p0 mtu_request=9216
_ovs-vsctl set Interface p1 mtu_request=9216
_ovs-vsctl set Interface br-ovn mtu_request=9216
_ovs-vsctl set Interface pf0hpf mtu_request=9216

For example:

_ovs-vsctl set Open_vSwitch . other_config:doca-init=true
_ovs-vsctl set Open_vSwitch . other_config:dpdk-max-memzones=50000
_ovs-vsctl set Open_vSwitch . other_config:hw-offload=true
_ovs-vsctl set Open_vSwitch . other_config:pmd-quiet-idle=true
_ovs-vsctl set Open_vSwitch . other_config:max-idle=20000
_ovs-vsctl set Open_vSwitch . other_config:max-revalidator=5000
_ovs-vsctl --if-exists del-br ovsbr1
_ovs-vsctl --if-exists del-br ovsbr2
_ovs-vsctl --may-exist add-br br-sfc
_ovs-vsctl set bridge br-sfc datapath_type=netdev
_ovs-vsctl set bridge br-sfc fail_mode=secure
_ovs-vsctl --may-exist add-port br-sfc p0
_ovs-vsctl set Interface p0 type=dpdk
_ovs-vsctl set Interface p0 mtu_request=9216
_ovs-vsctl set Port p0 external_ids:dpf-type=physical
_ovs-vsctl --may-exist add-port br-sfc p1
_ovs-vsctl set Interface p1 type=dpdk
_ovs-vsctl set Interface p1 mtu_request=9216
_ovs-vsctl set Port p1 external_ids:dpf-type=physical

_ovs-vsctl set Open_vSwitch . external-ids:ovn-bridge-datapath-type=netdev
_ovs-vsctl --may-exist add-br br-ovn
_ovs-vsctl set bridge br-ovn datapath_type=netdev
_ovs-vsctl set Interface br-ovn mtu_request=9216
_ovs-vsctl --may-exist add-port br-ovn pf0hpf
_ovs-vsctl set Interface pf0hpf type=dpdk
_ovs-vsctl set Interface pf0hpf mtu_request=9216
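
After the DPU is provisioned with this flavor, the requested MTU can be verified from OVS on the DPU's Arm side, for example:

# mtu_request is what was configured; mtu reflects the value actually applied
ovs-vsctl get Interface p0 mtu_request
ovs-vsctl get Interface p0 mtu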

Please increase the hugepages allocation in the DPUFlavor YAML:

- hugepages=8072
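
After provisioning, the allocation can be checked on the DPU:

# Total and free hugepages as seen by the kernel on the DPU
grep -i huge /proc/meminfo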

Please add the following line to the OVN DPUServiceConfiguration YAML (manifests/05-dpudeployment-installation/dpuserviceconfig_ovn.yaml):

mtu: 8940

For example:

---
apiVersion: svc.dpu.nvidia.com/v1alpha1
kind: DPUServiceConfiguration
metadata:
  name: ovn
  namespace: dpf-operator-system
spec:
  deploymentServiceName: "ovn"
  serviceConfiguration:
    helmChart:
      values:
        k8sAPIServer: https://$TARGETCLUSTER_API_SERVER_HOST:$TARGETCLUSTER_API_SERVER_PORT
        podNetwork: $POD_CIDR/24
        serviceNetwork: $SERVICE_CIDR
        mtu: 8940
        dpuManifests:
          kubernetesSecretName: "ovn-dpu" # user needs to populate based on DPUServiceCredentialRequest
          vtepCIDR: "10.0.120.0/22" # user needs to populate based on DPUServiceIPAM
          hostCIDR: $TARGETCLUSTER_NODE_CIDR # user needs to populate
          ipamPool: "pool1" # user needs to populate based on DPUServiceIPAM
          ipamPoolType: "cidrpool" # user needs to populate based on DPUServiceIPAM
          ipamVTEPIPIndex: 0
          ipamPFIPIndex: 1
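
The updated configuration is applied like any other manifest in the DPU deployment flow, for example:

kubectl apply -f manifests/05-dpudeployment-installation/dpuserviceconfig_ovn.yaml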
