NVIDIA Network Operator v25.7.0

Deploy MacVLAN Network with RDMA Shared Device

Step 1: Create NicClusterPolicy with RDMA shared device

apiVersion: mellanox.com/v1alpha1
kind: NicClusterPolicy
metadata:
  name: nic-cluster-policy
spec:
  rdmaSharedDevicePlugin:
    image: k8s-rdma-shared-dev-plugin
    repository: nvcr.io/nvidia/mellanox
    version: network-operator-v25.7.0
    config: |
      {
        "configList": [
          {
            "resourceName": "rdma_shared_device_a",
            "rdmaHcaMax": 63,
            "selectors": {
              "ifNames": ["ens1f0"]
            }
          }
        ]
      }
  nvIpam:
    image: nvidia-k8s-ipam
    repository: nvcr.io/nvidia/mellanox
    version: network-operator-v25.7.0
    imagePullSecrets: []
    enableWebhook: false
  secondaryNetwork:
    cniPlugins:
      image: plugins
      repository: nvcr.io/nvidia/mellanox
      version: network-operator-v25.7.0
    multus:
      image: multus-cni
      repository: nvcr.io/nvidia/mellanox
      version: network-operator-v25.7.0

kubectl apply -f nicclusterpolicy.yaml
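
Before continuing, you can confirm that the operator has reconciled the policy. Once all components are deployed, the NicClusterPolicy status should report a ready state; the jsonpath below assumes the status.state field exposed by the operator — if it returns nothing, inspect the full CR with -o yaml instead:

kubectl get nicclusterpolicy nic-cluster-policy -o jsonpath='{.status.state}'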

Step 2: Create IPPool for nv-ipam

apiVersion: nv-ipam.nvidia.com/v1alpha1
kind: IPPool
metadata:
  name: macvlan-pool
  namespace: nvidia-network-operator
spec:
  subnet: 192.168.4.0/24
  perNodeBlockSize: 50
  gateway: 192.168.4.1

kubectl apply -f ippool.yaml
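
Optionally, verify that nv-ipam accepted the pool; per-node address block allocations appear in the pool status once nodes have been assigned ranges. The fully qualified resource name is used here to avoid clashing with IPPool CRDs from other CNI projects:

kubectl get ippools.nv-ipam.nvidia.com -n nvidia-network-operator macvlan-pool -o yaml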

Step 3: Create MacvlanNetwork

apiVersion: mellanox.com/v1alpha1
kind: MacvlanNetwork
metadata:
  name: macvlan-network
spec:
  networkNamespace: "default"
  master: "ens1f0"
  mode: "bridge"
  mtu: 1500
  ipam: |
    {
      "type": "nv-ipam",
      "poolName": "macvlan-pool"
    }

kubectl apply -f macvlannetwork.yaml
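
The operator renders the MacvlanNetwork into a NetworkAttachmentDefinition in the target namespace, which is what the pod annotation in the next step references. You can confirm it was created:

kubectl get network-attachment-definitions -n default macvlan-network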

Step 4: Deploy test workload

apiVersion: v1
kind: Pod
metadata:
  name: macvlan-test-pod
  annotations:
    k8s.v1.cni.cncf.io/networks: macvlan-network
spec:
  containers:
    - name: test-container
      image: mellanox/rping-test
      command: ["/bin/bash", "-c", "sleep infinity"]
      securityContext:
        capabilities:
          add: ["IPC_LOCK"]
      resources:
        requests:
          rdma/rdma_shared_device_a: 1
        limits:
          rdma/rdma_shared_device_a: 1

kubectl apply -f pod.yaml
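
The pod remains Pending until a node can satisfy the rdma/rdma_shared_device_a resource request and Multus attaches the secondary interface, so it can help to wait for readiness explicitly before verifying:

kubectl wait --for=condition=Ready pod/macvlan-test-pod --timeout=120s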

Verify the deployment:

kubectl exec -it macvlan-test-pod -- ip addr show
kubectl exec -it macvlan-test-pod -- ibv_devinfo
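
For an end-to-end RDMA check, you can run rping (shipped in the mellanox/rping-test image used above) between two pods that both request the shared device and attach to macvlan-network. The second pod name and the server address below are placeholders, assuming you create another pod from the same spec and read the server's macvlan address from its net1 interface:

# In the server pod: listen for RDMA connections
kubectl exec -it macvlan-test-pod -- rping -s -v

# In a second pod (hypothetical name macvlan-test-pod-2); replace 192.168.4.x
# with the address shown by 'ip addr show net1' in the server pod
kubectl exec -it macvlan-test-pod-2 -- rping -c -a 192.168.4.x -v -C 5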

Complete Configuration

apiVersion: mellanox.com/v1alpha1
kind: NicClusterPolicy
metadata:
  name: nic-cluster-policy
spec:
  rdmaSharedDevicePlugin:
    image: k8s-rdma-shared-dev-plugin
    repository: nvcr.io/nvidia/mellanox
    version: network-operator-v25.7.0
    config: |
      {
        "configList": [
          {
            "resourceName": "rdma_shared_device_a",
            "rdmaHcaMax": 63,
            "selectors": {
              "ifNames": ["ens1f0"]
            }
          }
        ]
      }
  nvIpam:
    image: nvidia-k8s-ipam
    repository: nvcr.io/nvidia/mellanox
    version: network-operator-v25.7.0
    imagePullSecrets: []
    enableWebhook: false
  secondaryNetwork:
    cniPlugins:
      image: plugins
      repository: nvcr.io/nvidia/mellanox
      version: network-operator-v25.7.0
    multus:
      image: multus-cni
      repository: nvcr.io/nvidia/mellanox
      version: network-operator-v25.7.0
---
apiVersion: nv-ipam.nvidia.com/v1alpha1
kind: IPPool
metadata:
  name: macvlan-pool
  namespace: nvidia-network-operator
spec:
  subnet: 192.168.4.0/24
  perNodeBlockSize: 50
  gateway: 192.168.4.1
---
apiVersion: mellanox.com/v1alpha1
kind: MacvlanNetwork
metadata:
  name: macvlan-network
spec:
  networkNamespace: "default"
  master: "ens1f0"
  mode: "bridge"
  mtu: 1500
  ipam: |
    {
      "type": "nv-ipam",
      "poolName": "macvlan-pool"
    }
---
apiVersion: v1
kind: Pod
metadata:
  name: macvlan-test-pod
  annotations:
    k8s.v1.cni.cncf.io/networks: macvlan-network
spec:
  containers:
    - name: test-container
      image: mellanox/rping-test
      command: ["/bin/bash", "-c", "sleep infinity"]
      securityContext:
        capabilities:
          add: ["IPC_LOCK"]
      resources:
        requests:
          rdma/rdma_shared_device_a: 1
        limits:
          rdma/rdma_shared_device_a: 1
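
The combined manifest above can be saved to a single file and applied in one command (the filename is arbitrary):

kubectl apply -f macvlan-rdma-shared.yaml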
