NVIDIA Network Operator v25.7.0

Deploy SR-IOV InfiniBand Network with RDMA

Step 1: Create NicClusterPolicy for InfiniBand

apiVersion: mellanox.com/v1alpha1
kind: NicClusterPolicy
metadata:
  name: nic-cluster-policy
spec:
  ofedDriver:
    image: doca-driver
    repository: nvcr.io/nvidia/mellanox
    version: doca3.1.0-25.07-0.9.7.0-0
  nvIpam:
    image: nvidia-k8s-ipam
    repository: nvcr.io/nvidia/mellanox
    version: network-operator-v25.7.0
    imagePullSecrets: []
    enableWebhook: false
  secondaryNetwork:
    cniPlugins:
      image: plugins
      repository: nvcr.io/nvidia/mellanox
      version: network-operator-v25.7.0
    multus:
      image: multus-cni
      repository: nvcr.io/nvidia/mellanox
      version: network-operator-v25.7.0

kubectl apply -f nicclusterpolicy.yaml
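
Before moving on, you can optionally confirm that the operator has rolled out the DOCA driver, nv-ipam, and secondary-network components. A minimal check, assuming the NicClusterPolicy status exposes a state field as in recent operator releases (pod names vary by deployment):

kubectl get pods -n nvidia-network-operator
kubectl get nicclusterpolicy nic-cluster-policy -o jsonpath='{.status.state}'

The policy should report ready once the driver and Multus pods are running on the selected nodes.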

Step 2: Create IPPool for nv-ipam

apiVersion: nv-ipam.nvidia.com/v1alpha1
kind: IPPool
metadata:
  name: sriov-ib-pool
  namespace: nvidia-network-operator
spec:
  subnet: 192.168.6.0/24
  perNodeBlockSize: 50
  gateway: 192.168.6.1

kubectl apply -f ippool.yaml
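
Optionally verify that nv-ipam accepted the pool (output columns may differ between releases):

kubectl get ippools.nv-ipam.nvidia.com -n nvidia-network-operator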

Step 3: Configure SR-IOV for InfiniBand

apiVersion: sriovnetwork.openshift.io/v1
kind: SriovNetworkNodePolicy
metadata:
  name: infiniband-sriov
  namespace: nvidia-network-operator
spec:
  deviceType: netdevice
  mtu: 1500
  nodeSelector:
    feature.node.kubernetes.io/pci-15b3.present: "true"
  nicSelector:
    vendor: "15b3"
  linkType: IB
  isRdma: true
  numVfs: 8
  priority: 90
  resourceName: mlnxnics

kubectl apply -f sriovnetworknodepolicy.yaml
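
Applying the policy triggers VF configuration on the matching nodes, which can take several minutes and may involve node drains. You can optionally watch the per-node sync status, assuming the SriovNetworkNodeState objects report a syncStatus field as in current sriov-network-operator releases:

kubectl get sriovnetworknodestates -n nvidia-network-operator -o yaml | grep syncStatus

Wait until the status reads Succeeded before creating the network in the next step.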

Step 4: Create SriovIBNetwork

apiVersion: sriovnetwork.openshift.io/v1
kind: SriovIBNetwork
metadata:
  name: sriov-ib-network
  namespace: nvidia-network-operator
spec:
  ipam: |
    {
      "type": "nv-ipam",
      "poolName": "sriov-ib-pool"
    }
  resourceName: mlnxnics
  linkState: enable
  networkNamespace: default

kubectl apply -f sriovibnetwork.yaml
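
Once the SriovIBNetwork is created, the operator renders a NetworkAttachmentDefinition in the namespace given by networkNamespace (default in this example). An optional check:

kubectl get network-attachment-definitions -n default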

Step 5: Deploy test workload

apiVersion: v1
kind: Pod
metadata:
  name: sriov-ib-test-pod
  annotations:
    k8s.v1.cni.cncf.io/networks: sriov-ib-network
spec:
  containers:
    - name: test-container
      image: mellanox/rping-test
      command: ["/bin/bash", "-c", "sleep infinity"]
      securityContext:
        capabilities:
          add: ["IPC_LOCK"]
      resources:
        requests:
          nvidia.com/mlnxnics: '1'
        limits:
          nvidia.com/mlnxnics: '1'

kubectl apply -f pod.yaml
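
The pod stays Pending until a nvidia.com/mlnxnics VF resource is allocatable on a node. Optionally wait for it to become Ready before running the verification commands:

kubectl wait --for=condition=Ready pod/sriov-ib-test-pod --timeout=300s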

Verify the deployment:

kubectl exec -it sriov-ib-test-pod -- ibv_devices
kubectl exec -it sriov-ib-test-pod -- ibstat
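
The ibv_devices and ibstat output should list the VF assigned to the pod. To also confirm that Multus attached the secondary interface and that nv-ipam assigned an address from sriov-ib-pool, you can inspect the interface inside the pod (net1 is the conventional name of the first secondary interface; it may differ in your setup):

kubectl exec -it sriov-ib-test-pod -- ip addr show net1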

Complete Configuration

apiVersion: mellanox.com/v1alpha1
kind: NicClusterPolicy
metadata:
  name: nic-cluster-policy
spec:
  ofedDriver:
    image: doca-driver
    repository: nvcr.io/nvidia/mellanox
    version: doca3.1.0-25.07-0.9.7.0-0
  nvIpam:
    image: nvidia-k8s-ipam
    repository: nvcr.io/nvidia/mellanox
    version: network-operator-v25.7.0
    imagePullSecrets: []
    enableWebhook: false
  secondaryNetwork:
    cniPlugins:
      image: plugins
      repository: nvcr.io/nvidia/mellanox
      version: network-operator-v25.7.0
    multus:
      image: multus-cni
      repository: nvcr.io/nvidia/mellanox
      version: network-operator-v25.7.0
---
apiVersion: nv-ipam.nvidia.com/v1alpha1
kind: IPPool
metadata:
  name: sriov-ib-pool
  namespace: nvidia-network-operator
spec:
  subnet: 192.168.6.0/24
  perNodeBlockSize: 50
  gateway: 192.168.6.1
---
apiVersion: sriovnetwork.openshift.io/v1
kind: SriovNetworkNodePolicy
metadata:
  name: infiniband-sriov
  namespace: nvidia-network-operator
spec:
  deviceType: netdevice
  mtu: 1500
  nodeSelector:
    feature.node.kubernetes.io/pci-15b3.present: "true"
  nicSelector:
    vendor: "15b3"
  linkType: IB
  isRdma: true
  numVfs: 8
  priority: 90
  resourceName: mlnxnics
---
apiVersion: sriovnetwork.openshift.io/v1
kind: SriovIBNetwork
metadata:
  name: sriov-ib-network
  namespace: nvidia-network-operator
spec:
  ipam: |
    {
      "type": "nv-ipam",
      "poolName": "sriov-ib-pool"
    }
  resourceName: mlnxnics
  linkState: enable
  networkNamespace: default
---
apiVersion: v1
kind: Pod
metadata:
  name: sriov-ib-test-pod
  annotations:
    k8s.v1.cni.cncf.io/networks: sriov-ib-network
spec:
  containers:
    - name: test-container
      image: mellanox/rping-test
      command: ["/bin/bash", "-c", "sleep infinity"]
      securityContext:
        capabilities:
          add: ["IPC_LOCK"]
      resources:
        requests:
          nvidia.com/mlnxnics: '1'
        limits:
          nvidia.com/mlnxnics: '1'
