NVIDIA Network Operator v25.7.0

Host Device Network with RDMA

Step 1: Create NicClusterPolicy with host device support

apiVersion: mellanox.com/v1alpha1
kind: NicClusterPolicy
metadata:
  name: nic-cluster-policy
spec:
  sriovDevicePlugin:
    image: sriov-network-device-plugin
    repository: nvcr.io/nvidia/mellanox
    version: network-operator-v25.7.0
    config: |
      {
        "resourceList": [
          {
            "resourcePrefix": "nvidia.com",
            "resourceName": "hostdev",
            "selectors": {
              "vendors": ["15b3"],
              "isRdma": true
            }
          }
        ]
      }
  nvIpam:
    image: nvidia-k8s-ipam
    repository: nvcr.io/nvidia/mellanox
    version: network-operator-v25.7.0
    imagePullSecrets: []
    enableWebhook: false
  secondaryNetwork:
    cniPlugins:
      image: plugins
      repository: nvcr.io/nvidia/mellanox
      version: network-operator-v25.7.0
    multus:
      image: multus-cni
      repository: nvcr.io/nvidia/mellanox
      version: network-operator-v25.7.0

kubectl apply -f nicclusterpolicy.yaml
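
Optionally, verify that the policy has reconciled and that worker nodes now advertise the hostdev resource. The node name below is a placeholder; substitute a node from your cluster:

kubectl get nicclusterpolicy nic-cluster-policy -o jsonpath='{.status.state}'
kubectl describe node <node-name> | grep nvidia.com/hostdev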

Step 2: Create IPPool for nv-ipam

apiVersion: nv-ipam.nvidia.com/v1alpha1
kind: IPPool
metadata:
  name: hostdev-pool
  namespace: nvidia-network-operator
spec:
  subnet: 192.168.3.0/24
  perNodeBlockSize: 50
  gateway: 192.168.3.1

kubectl apply -f ippool.yaml
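
To confirm that nv-ipam has registered the pool and assigned per-node IP blocks, inspect the IPPool status (the exact status layout may vary with the nv-ipam version):

kubectl get ippools.nv-ipam.nvidia.com -n nvidia-network-operator hostdev-pool -o yaml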

Step 3: Create HostDeviceNetwork

apiVersion: mellanox.com/v1alpha1
kind: HostDeviceNetwork
metadata:
  name: hostdev-net
spec:
  networkNamespace: "default"
  resourceName: "hostdev"
  ipam: |
    {
      "type": "nv-ipam",
      "poolName": "hostdev-pool"
    }

kubectl apply -f hostdevicenetwork.yaml
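
The operator should render a matching NetworkAttachmentDefinition in the namespace given by networkNamespace. A quick way to check both objects:

kubectl get hostdevicenetwork hostdev-net
kubectl get network-attachment-definitions -n default hostdev-net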

Step 4: Deploy test workload

apiVersion: v1
kind: Pod
metadata:
  name: hostdev-test-pod
  annotations:
    k8s.v1.cni.cncf.io/networks: hostdev-net
spec:
  containers:
    - name: test-container
      image: mellanox/rping-test
      command: ["/bin/bash", "-c", "sleep infinity"]
      securityContext:
        capabilities:
          add: ["IPC_LOCK"]
      resources:
        requests:
          nvidia.com/hostdev: '1'
        limits:
          nvidia.com/hostdev: '1'

kubectl apply -f pod.yaml
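
Before running the verification below, make sure the pod has reached the Running state and that Multus attached the secondary network; the network-status annotation set by Multus lists the attached interfaces and their IPs:

kubectl get pod hostdev-test-pod
kubectl describe pod hostdev-test-pod | grep -A 8 k8s.v1.cni.cncf.io/network-status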

Verify the deployment:

kubectl exec -it hostdev-test-pod -- lspci | grep Mellanox
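
If the lspci output shows the expected device, you can additionally confirm RDMA access from inside the pod, assuming the mellanox/rping-test image ships the rdma-core userspace tools:

kubectl exec -it hostdev-test-pod -- ibv_devinfo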

Complete Configuration

apiVersion: mellanox.com/v1alpha1
kind: NicClusterPolicy
metadata:
  name: nic-cluster-policy
spec:
  sriovDevicePlugin:
    image: sriov-network-device-plugin
    repository: nvcr.io/nvidia/mellanox
    version: network-operator-v25.7.0
    config: |
      {
        "resourceList": [
          {
            "resourcePrefix": "nvidia.com",
            "resourceName": "hostdev",
            "selectors": {
              "vendors": ["15b3"],
              "isRdma": true
            }
          }
        ]
      }
  nvIpam:
    image: nvidia-k8s-ipam
    repository: nvcr.io/nvidia/mellanox
    version: network-operator-v25.7.0
    imagePullSecrets: []
    enableWebhook: false
  secondaryNetwork:
    cniPlugins:
      image: plugins
      repository: nvcr.io/nvidia/mellanox
      version: network-operator-v25.7.0
    multus:
      image: multus-cni
      repository: nvcr.io/nvidia/mellanox
      version: network-operator-v25.7.0
---
apiVersion: nv-ipam.nvidia.com/v1alpha1
kind: IPPool
metadata:
  name: hostdev-pool
  namespace: nvidia-network-operator
spec:
  subnet: 192.168.3.0/24
  perNodeBlockSize: 50
  gateway: 192.168.3.1
---
apiVersion: mellanox.com/v1alpha1
kind: HostDeviceNetwork
metadata:
  name: hostdev-net
spec:
  networkNamespace: "default"
  resourceName: "hostdev"
  ipam: |
    {
      "type": "nv-ipam",
      "poolName": "hostdev-pool"
    }
---
apiVersion: v1
kind: Pod
metadata:
  name: hostdev-test-pod
  annotations:
    k8s.v1.cni.cncf.io/networks: hostdev-net
spec:
  containers:
    - name: test-container
      image: mellanox/rping-test
      command: ["/bin/bash", "-c", "sleep infinity"]
      securityContext:
        capabilities:
          add: ["IPC_LOCK"]
      resources:
        requests:
          nvidia.com/hostdev: '1'
        limits:
          nvidia.com/hostdev: '1'
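
To deploy everything in one pass, save the combined manifests above to a single file (the filename here is arbitrary) and apply it:

kubectl apply -f host-device-network-rdma.yaml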
