RDG for DPF Zero Trust (DPF-ZT) with OVN VPC DPU service

Performance and Isolation Tests

Now that the test deployment is running, perform bandwidth and latency performance tests between two bare-metal workload servers.

Note

Ubuntu 24.04 was installed on the servers.

Connect to the first Workload Server console, install iperf3 and perftest, set the number of VFs, install a DHCP client, check the VF2 IP address, and identify the relevant RDMA device:

First BM Server Console


root@worker1:~# apt install iperf3
root@worker1:~# apt install perftest
root@worker1:~# lspci | grep nox
2b:00.0 Ethernet controller: Mellanox Technologies MT43244 BlueField-3 integrated ConnectX-7 network controller (rev 01)
2b:00.1 Ethernet controller: Mellanox Technologies MT43244 BlueField-3 integrated ConnectX-7 network controller (rev 01)

root@worker1:~# echo 8 > /sys/bus/pci/devices/0000\:2b:00.0/sriov_numvfs
root@worker1:~# apt install isc-dhcp-client
root@worker1:~# dhclient -1 -v ens1f0v2
root@worker1:~# ip a s
...
10: ens1f0v2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 9a:a0:c0:94:16:88 brd ff:ff:ff:ff:ff:ff
    altname enp43s0f0v2
    inet 192.178.0.3/16 brd 192.178.255.255 scope global dynamic ens1f0v2
       valid_lft 2395sec preferred_lft 2395sec
    inet6 fe80::98a0:c0ff:fe94:1688/64 scope link
       valid_lft forever preferred_lft forever
...

root@worker1:~# ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
64 bytes from 8.8.8.8: icmp_seq=1 ttl=117 time=5.35 ms
64 bytes from 8.8.8.8: icmp_seq=2 ttl=117 time=5.10 ms
64 bytes from 8.8.8.8: icmp_seq=3 ttl=117 time=5.15 ms

root@worker1:~# rdma link | grep ens1f0v2
link mlx5_4/1 state ACTIVE physical_state LINK_UP netdev ens1f0v2
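When scripting this setup, the RDMA device backing a given VF netdev can be derived from the `rdma link` output rather than read off by eye. A minimal sketch, assuming the output format shown above (the `rdma_dev_for` helper name is ours):

```shell
# rdma_dev_for: print the RDMA device (e.g. mlx5_4) backing a netdev,
# given `rdma link` output on stdin. Helper name is illustrative.
rdma_dev_for() {  # usage: rdma link | rdma_dev_for <netdev>
  awk -v dev="$1" '$NF == dev { split($2, a, "/"); print a[1] }'
}

# Demonstrated on the sample line from the capture above:
echo "link mlx5_4/1 state ACTIVE physical_state LINK_UP netdev ens1f0v2" \
  | rdma_dev_for ens1f0v2
```

On a live server, pipe the real command output instead: `rdma link | rdma_dev_for ens1f0v2`.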

Using another console window, reconnect to the jump node and connect to the second Workload Server.

From within the server, install iperf3 and perftest, set the number of VFs, install a DHCP client, check the VF2 IP address, and identify the relevant RDMA device:

Second BM Server Console


root@worker2:~# apt install iperf3
root@worker2:~# apt install perftest
root@worker2:~# lspci | grep nox
2b:00.0 Ethernet controller: Mellanox Technologies MT43244 BlueField-3 integrated ConnectX-7 network controller (rev 01)
2b:00.1 Ethernet controller: Mellanox Technologies MT43244 BlueField-3 integrated ConnectX-7 network controller (rev 01)

root@worker2:~# echo 8 > /sys/bus/pci/devices/0000\:2b:00.0/sriov_numvfs
root@worker2:~# apt install isc-dhcp-client
root@worker2:~# dhclient -1 -v ens1f0v2
root@worker2:~# ip a s
...
10: ens1f0v2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether ba:76:70:05:bc:ff brd ff:ff:ff:ff:ff:ff
    altname enp43s0f0v2
    inet 192.178.0.2/16 brd 192.178.255.255 scope global dynamic ens1f0v2
       valid_lft 3071sec preferred_lft 3071sec
    inet6 fe80::b876:70ff:fe05:bcff/64 scope link
       valid_lft forever preferred_lft forever
...

root@worker2:~# ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
64 bytes from 8.8.8.8: icmp_seq=1 ttl=117 time=5.35 ms
64 bytes from 8.8.8.8: icmp_seq=2 ttl=117 time=5.10 ms
64 bytes from 8.8.8.8: icmp_seq=3 ttl=117 time=5.15 ms

root@worker2:~# rdma link | grep ens1f0v2
link mlx5_4/1 state ACTIVE physical_state LINK_UP netdev ens1f0v2

Move back to the first server console.

Start the iperf server side:

First BM Server Console


root@worker1:~# iperf3 -s
-----------------------------------------------------------
Server listening on 5201 (test #1)
-----------------------------------------------------------

Move to the second server console.

Start the iperf client side:

Second BM Server Console


root@worker2:~# iperf3 -c 192.178.0.3 -P 16
Connecting to host 192.178.0.3, port 5201
[  5] local 192.178.0.2 port 46348 connected to 192.178.0.3 port 5201
[  7] local 192.178.0.2 port 46360 connected to 192.178.0.3 port 5201
[  9] local 192.178.0.2 port 46368 connected to 192.178.0.3 port 5201
[ 11] local 192.178.0.2 port 46372 connected to 192.178.0.3 port 5201
[ 13] local 192.178.0.2 port 46376 connected to 192.178.0.3 port 5201
[ 15] local 192.178.0.2 port 46378 connected to 192.178.0.3 port 5201
[ 17] local 192.178.0.2 port 46382 connected to 192.178.0.3 port 5201
[ 19] local 192.178.0.2 port 46384 connected to 192.178.0.3 port 5201
[ 21] local 192.178.0.2 port 46396 connected to 192.178.0.3 port 5201
[ 23] local 192.178.0.2 port 46402 connected to 192.178.0.3 port 5201
[ 25] local 192.178.0.2 port 46410 connected to 192.178.0.3 port 5201
[ 27] local 192.178.0.2 port 46424 connected to 192.178.0.3 port 5201
[ 29] local 192.178.0.2 port 46438 connected to 192.178.0.3 port 5201
[ 31] local 192.178.0.2 port 46454 connected to 192.178.0.3 port 5201
[ 33] local 192.178.0.2 port 46466 connected to 192.178.0.3 port 5201
[ 35] local 192.178.0.2 port 46472 connected to 192.178.0.3 port 5201

[ ID] Interval            Transfer     Bandwidth
[  3] 0.0000-10.0058 sec  14.1 GBytes  12.1 Gbits/sec
[ 13] 0.0000-10.0057 sec  14.2 GBytes  12.2 Gbits/sec
[  7] 0.0000-10.0056 sec  13.4 GBytes  11.5 Gbits/sec
[ 12] 0.0000-10.0057 sec  15.2 GBytes  13.1 Gbits/sec
[  4] 0.0000-10.0058 sec  14.1 GBytes  12.1 Gbits/sec
[ 11] 0.0000-10.0058 sec  15.8 GBytes  13.6 Gbits/sec
[  8] 0.0000-10.0057 sec  13.9 GBytes  11.9 Gbits/sec
[  9] 0.0000-10.0058 sec  13.8 GBytes  11.9 Gbits/sec
[ 15] 0.0000-10.0057 sec  14.3 GBytes  12.3 Gbits/sec
[ 16] 0.0000-10.0058 sec  14.6 GBytes  12.5 Gbits/sec
[  1] 0.0000-10.0057 sec  14.6 GBytes  12.6 Gbits/sec
[  6] 0.0000-10.0058 sec  13.1 GBytes  11.3 Gbits/sec
[ 14] 0.0000-10.0059 sec  13.6 GBytes  11.6 Gbits/sec
[ 10] 0.0000-10.0055 sec  13.5 GBytes  11.6 Gbits/sec
[  2] 0.0000-10.0057 sec  14.0 GBytes  12.0 Gbits/sec
[  5] 0.0000-10.0058 sec  14.6 GBytes  12.6 Gbits/sec
[SUM] 0.0000-10.0010 sec  227 GBytes   195 Gbits/sec
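For repeated runs it can help to pull the aggregate figure out of the iperf3 summary automatically. A small sketch that parses the `[SUM]` line of the text output shown above (the field positions assume that exact format; iperf3 can also emit JSON with `-J`, which is more robust to parse):

```shell
# iperf_sum_gbps: extract the aggregate bandwidth value from an
# iperf3 [SUM] summary line supplied on stdin.
iperf_sum_gbps() {
  awk '/\[SUM\]/ { print $(NF-1) }'
}

# Demonstrated on the summary line from the run above:
echo "[SUM] 0.0000-10.0010 sec 227 GBytes 195 Gbits/sec" | iperf_sum_gbps
```

On a live server this would be fed from the client run, e.g. `iperf3 -c 192.178.0.3 -P 16 | iperf_sum_gbps`.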

Return to the first server console.

Start the ib_read_lat server side:

First BM Server Console


root@worker1:~# ib_read_lat -F -n 20000 -d mlx5_4

************************************
* Waiting for client to connect... *
************************************

Move to the second server console.

Start the ib_read_lat client side:

Second BM Server Console


root@worker2:~# ib_read_lat -F -n 20000 -d mlx5_4 192.178.0.3

---------------------------------------------------------------------------------------
                    RDMA_Read Latency Test
 Dual-port       : OFF          Device         : mlx5_4
 Number of qps   : 1            Transport type : IB
 Connection type : RC           Using SRQ      : OFF
 PCIe relax order: ON
 ibv_wr* API     : ON
 TX depth        : 1
 Mtu             : 1024[B]
 Link type       : Ethernet
 GID index       : 3
 Outstand reads  : 16
 rdma_cm QPs     : OFF
 Data ex. method : Ethernet
---------------------------------------------------------------------------------------
 local address: LID 0000 QPN 0x0108 PSN 0xa5a4e OUT 0x10 RKey 0x031005 VAddr 0x005a7a24ef7000
 GID: 00:00:00:00:00:00:00:00:00:00:255:255:192:178:00:02
 remote address: LID 0000 QPN 0x0108 PSN 0x6caf0 OUT 0x10 RKey 0x031005 VAddr 0x006264a9e00000
 GID: 00:00:00:00:00:00:00:00:00:00:255:255:192:178:00:03
---------------------------------------------------------------------------------------
 #bytes  #iterations  t_min[usec]  t_max[usec]  t_typical[usec]  t_avg[usec]  t_stdev[usec]  99% percentile[usec]  99.9% percentile[usec]
 2       20000        10.51        73.16        13.81            15.35        4.74           29.66                 42.23
---------------------------------------------------------------------------------------
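To gate a scripted run on these numbers, the result row can be checked against a latency budget. A hedged sketch; the column order (bytes, iterations, t_min, t_max, t_typical, t_avg, t_stdev, 99%, 99.9%) follows the perftest output above, and the 50 usec threshold is an arbitrary example:

```shell
# lat_check: read an ib_read_lat result row on stdin and fail when the
# average latency (column 6, usec) exceeds the given threshold.
lat_check() {  # usage: lat_check <max_avg_usec>
  awk -v max="$1" '{ printf "avg=%s p99=%s\n", $6, $8
                     exit ($6 + 0 > max + 0) ? 1 : 0 }'
}

# Demonstrated on the result row from the run above (threshold: 50 usec):
echo "2 20000 10.51 73.16 13.81 15.35 4.74 29.66 42.23" | lat_check 50 \
  && echo "latency within budget"
```

The exit status makes the helper usable directly in CI pipelines or `&&` chains.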

Return to the first server console.

Start the ib_write_bw server side:

First BM Server Console


root@worker1:~# ib_write_bw -s 1048576 -F -D 30 -q 64 -d mlx5_4

************************************
* Waiting for client to connect... *
************************************

Move to the second server console.

Start the ib_write_bw client side:

Second BM Server Console


root@worker2:~# ib_write_bw -s 1048576 -F -D 30 -q 64 -d mlx5_4 192.178.0.3 --report_gbit

---------------------------------------------------------------------------------------
                    RDMA_Write BW Test
 Dual-port       : OFF          Device         : mlx5_4
 Number of qps   : 64           Transport type : IB
 Connection type : RC           Using SRQ      : OFF
 PCIe relax order: ON
 ibv_wr* API     : ON
 TX depth        : 128
 CQ Moderation   : 1
 Mtu             : 1024[B]
 Link type       : Ethernet
 GID index       : 3
 Max inline data : 0[B]
 rdma_cm QPs     : OFF
 Data ex. method : Ethernet
---------------------------------------------------------------------------------------
…
---------------------------------------------------------------------------------------
 #bytes     #iterations    BW peak[Gb/sec]    BW average[Gb/sec]    MsgRate[Mpps]
 1048576    448865         0.00               235.89                0.028120
---------------------------------------------------------------------------------------
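The bandwidth result can be gated the same way. A sketch that checks the BW average column (fourth field: bytes, iterations, BW peak, BW average, MsgRate) of the result row shown above against a floor; the 200 Gb/sec floor is an example value, not a requirement from this RDG:

```shell
# bw_check: read an ib_write_bw result row on stdin and fail when the
# BW average (column 4, Gb/sec) falls below the given floor.
bw_check() {  # usage: bw_check <min_gbps>
  awk -v min="$1" '{ printf "bw_avg=%s Gb/sec\n", $4
                     exit ($4 + 0 < min + 0) ? 1 : 0 }'
}

# Demonstrated on the result row from the run above (floor: 200 Gb/sec):
echo "1048576 448865 0.00 235.89 0.028120" | bw_check 200 \
  && echo "bandwidth within budget"
```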

Finally, verify that two servers on different networks (one using a virtual function on the RED VPC, the other on the BLUE VPC) cannot communicate with each other.

Run the iperf3 test between Worker1 and Worker3.

Start the iperf3 server side:

First BM Server Console


root@worker1:~# iperf3 -s
-----------------------------------------------------------
Server listening on 5201 (test #1)
-----------------------------------------------------------

Move to the third server console.

Start the iperf client side:

Third BM Server Console


root@worker3:~# iperf3 -c 192.178.0.3 -P 16
iperf3: error - unable to connect to server - server may have stopped running or use a different port, firewall issue, etc.: Connection refused

This connection attempt should fail due to the network isolation implemented in HBN using different VLANs, VNIs, and VRFs.
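When automating this isolation check, the expected outcome is a failure, so a wrapper should invert the exit status of the connectivity probe. A minimal sketch; the helper name is ours, and the probe command shown in the comment reuses the IP from the test above:

```shell
# check_isolation: succeed only when the given connectivity probe FAILS,
# i.e. when traffic does not cross the VPC boundary.
check_isolation() {
  if "$@" >/dev/null 2>&1; then
    echo "FAIL: traffic crossed the VPC boundary"
    return 1
  else
    echo "PASS: VPCs are isolated"
    return 0
  fi
}

# On a live setup the probe would be something like:
#   check_isolation timeout 5 iperf3 -c 192.178.0.3 -t 2
```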

© Copyright 2025, NVIDIA. Last updated on Jul 17, 2025.