NVIDIA Messaging Accelerator (VMA) Documentation Rev 9.5.2

Known Issues

The following is a list of general limitations and known issues in the various components of VMA.

Internal Reference Number

Details

2371415

Description: If the switch replicates a UDP packet to both ports, the application receives duplicate packets on a bonding interface.

Workaround: N/A

Keywords: RoCE LAG; Bonding

Discovered in Version: 9.2.2

2394023

Description: Active-backup mode with fail_over_mac=1 is not supported on a bonding interface.

Workaround: N/A

Keywords: Bonding

Discovered in Version: 9.2.2

2393571

Description: Detaching/attaching slaves on a bond interface at runtime is not supported.

Workaround: N/A

Keywords: Bonding

Discovered in Version: 9.2.2

1554637

Description: VMA offloads a NetVSC device (Windows Hyper-V network driver) only if SR-IOV is enabled upon application initialization.

Workaround: N/A

Keywords: Windows Hypervisor, NetVSC

Discovered in Version: 8.7.5

1542628

Description: Device memory programming is not supported on VMs that lack Blue Flame support.

Workaround: N/A

Keywords: MEMIC, Device Memory, Virtual Machine, Blue flame

Discovered in Version: 8.7.5

-

Description: When working with a Linux guest over a Windows Hypervisor and exceeding the maximum number of flow steering rules supported by the VM, the following error message appears:


VMA ERROR: rfs[0x32ea410]:273:create_ibv_flow() Create of QP flow ID (tag: 0) failed with flow dst:5.5.5.77:6891, src:0.0.0.0:0, proto:UDP (errno=12 - Cannot allocate memory)
VMA ERROR: ring_simple[0x3292d50]:555:attach_flow() attach_flow=0 failed!

As a result, TCP Receive Flow will not be supported, and UDP Receive Flow will not be offloaded.

Workaround: Reduce the number of VMs supported by the device in the Hypervisor to increase the number of flow steering rules available to each VM.

Keywords: Windows Hypervisor, flow steering limit

Discovered in Version: 8.5.7

1201675

Description: When a non-privileged user uses VMA with the RHEL inbox driver to perform networking operations (e.g., allocating IB resources), VMA crashes with a segmentation fault.

Workaround: Use VMA with root privileges on the RHEL inbox driver.

Keywords: RHEL inbox driver, segmentation fault

Discovered in Version: 8.4.10

Description: The following VMA_ERROR messages are displayed when running ping with root permissions:


VMA ERROR: ring_simple[0x7f257d18d720]:256:create_resources() ibv_create_comp_channel for tx failed. m_p_tx_comp_event_channel = (nil) (errno=13 Permission denied)
VMA ERROR: ib_ctx_handler213:mem_dereg() failed de-registering a memory region (errno=13 Permission denied)

Workaround: N/A

Keywords: VMA_ERROR while running ping with root permissions

Discovered in Version: 8.4.8

965237

Description: The following socket APIs are directed to the OS and are not offloaded by VMA:

  • int socketpair(int domain, int type, int protocol, int sv[2]);

  • int dup(int oldfd);

  • int dup2(int oldfd, int newfd);

Workaround: N/A

Keywords: sockets, socketpair, dup, dup2

965227

Description: Multicast (MC) loopback within a process is not supported by VMA (see the sketch after this entry):

  • If an application process opens two (or more) sockets on the same MC group, they will not receive each other's traffic.

    Warning

    MC loopback between different VMA processes always works.

  • Both sockets will receive all ingress traffic coming from the wire.

Workaround: N/A

Keywords: Multicast, Loopback
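
For illustration, below is a minimal sketch of the affected pattern (group address and port are hypothetical): two UDP sockets in one process joined to the same MC group. Under VMA, traffic sent on one socket is not looped back to the other, while both still receive all ingress traffic from the wire.

    /* Sketch of the unsupported pattern: two sockets in the same process
     * joined to one multicast group (group/port are hypothetical). */
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    static int open_mc_socket(const char *group, int port)
    {
        int fd = socket(AF_INET, SOCK_DGRAM, 0);
        int on = 1;
        setsockopt(fd, SOL_SOCKET, SO_REUSEADDR, &on, sizeof(on));

        struct sockaddr_in addr;
        memset(&addr, 0, sizeof(addr));
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port = htons(port);
        bind(fd, (struct sockaddr *)&addr, sizeof(addr));

        struct ip_mreq mreq;
        memset(&mreq, 0, sizeof(mreq));
        inet_pton(AF_INET, group, &mreq.imr_multiaddr);
        mreq.imr_interface.s_addr = htonl(INADDR_ANY);
        setsockopt(fd, IPPROTO_IP, IP_ADD_MEMBERSHIP, &mreq, sizeof(mreq));
        return fd;
    }

    int main(void)
    {
        /* Under VMA, a send() on fd1 is NOT looped back to fd2 in the same
         * process; both sockets still see all ingress traffic from the wire. */
        int fd1 = open_mc_socket("224.1.2.3", 12345);
        int fd2 = open_mc_socket("224.1.2.3", 12345);
        close(fd1);
        close(fd2);
        return 0;
    }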

965227

Description: MC loopback between VMA and the OS is limited:

  • The OS will reject loopback traffic coming from the NIC.

  • MC traffic from the OS to VMA is functional.

Workaround: N/A

Keywords: Multicast, Loopback

965227

Description: MC loopback Tx is currently disabled, and setsockopt(IP_MULTICAST_LOOP) is not supported (see the sketch after this entry).

Workaround: N/A

Keywords: Multicast, Loopback
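
For reference, a minimal sketch of the unsupported call:

    /* Sketch: setsockopt(IP_MULTICAST_LOOP) is not supported under VMA. */
    #include <netinet/in.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_DGRAM, 0);
        unsigned char loop = 1;
        /* Under VMA this request is not honored; MC loopback Tx stays disabled. */
        setsockopt(fd, IPPROTO_IP, IP_MULTICAST_LOOP, &loop, sizeof(loop));
        close(fd);
        return 0;
    }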

919301

Description: VMA supports bonding in the following modes:

  • For active-passive (mode=1), use either fail_over_mac=0 or fail_over_mac=1.

  • For active-active (mode=4), use fail_over_mac=0.

  • For VLAN over bond, use fail_over_mac=0 for traffic to be offloaded.

Workaround: N/A

Keywords: High Availability (HA)

1011005

Description: The VMA select() option supports up to 200 TCP sockets.

Workaround: Use ePoll, which supports up to 6000 sockets.

Keywords: SockPerf

977899

Description: An unsuccessful attempt to connect to a local interface is reported by VMA as "Connection timed out" rather than "Connection refused".

Workaround: N/A

Keywords: Verification

1019085

Description: Poll is limited in the number of sockets it can handle.

Workaround: Use ePoll for a large number of sockets (tested up to 6000); see the sketch below.

Keywords: Poll, ePoll
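
A minimal sketch of the suggested workaround, assuming fds[] and nfds describe the application's already-open sockets (both are hypothetical names):

    /* Monitor many sockets with ePoll instead of select()/poll(). */
    #include <stdio.h>
    #include <sys/epoll.h>
    #include <unistd.h>

    int wait_ready(const int *fds, int nfds)
    {
        int epfd = epoll_create1(0);
        if (epfd < 0)
            return -1;

        /* Register every socket for readability events. */
        for (int i = 0; i < nfds; i++) {
            struct epoll_event ev = { .events = EPOLLIN, .data.fd = fds[i] };
            epoll_ctl(epfd, EPOLL_CTL_ADD, fds[i], &ev);
        }

        /* Wait up to one second for any socket to become readable. */
        struct epoll_event ready[64];
        int n = epoll_wait(epfd, ready, 64, 1000);
        for (int i = 0; i < n; i++)
            printf("fd %d is readable\n", ready[i].data.fd);

        close(epfd);
        return n;
    }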

Description: VLAN on the bond interface does not function properly when bonding is configured with fail_over_mac=1 due to a kernel bug.

Workaround: Set fail_over_mac=0.

Keywords: VLAN and High Availability (HA)

Description: If LACP is not configured properly on the switch, received multicast traffic might be duplicated.

Workaround: Make sure LACP is configured properly, or move to a kernel newer than 3.14, or to RHEL 7.2 and higher, as they already include a fix.

Keywords: LACP High Availability (HA) – multiple MC traffic

Description: RX UDP UC and MC traffic in Ethernet, and RX UDP UC traffic in InfiniBand, with fragmented packets (message size larger than the MTU) is not offloaded by VMA and passes through the kernel network stack. There might be performance degradation.

Workaround: N/A

Keywords: Issues with UDP fragmented traffic reassembly

Description: The following VMA_PANIC message is displayed when there are not enough open files defined on the server:


VMA PANIC : si[fd=1023]:51:sockinfo() failed to create internal epoll (ret=-1 Too many open files)

Workaround: Verify that the maximum number of open FDs (file descriptors) in the system (ulimit -n) is twice the number of needed sockets; VMA's internal logic requires one additional FD per offloaded socket. See the sketch below.

Keywords: VMA_PANIC while opening large number of sockets
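
A minimal sketch of that check done programmatically (getrlimit/setrlimit are the in-process equivalent of ulimit -n; needed_sockets is an assumed application-known value):

    /* Ensure the open-FD limit is at least twice the number of needed
     * sockets, since VMA needs one extra FD per offloaded socket. */
    #include <sys/resource.h>

    int ensure_fd_limit(rlim_t needed_sockets)
    {
        struct rlimit rl;
        if (getrlimit(RLIMIT_NOFILE, &rl) != 0)
            return -1;

        rlim_t wanted = needed_sockets * 2;
        if (rl.rlim_cur < wanted) {
            /* Raise the soft limit, capped at the hard limit. */
            rl.rlim_cur = (wanted > rl.rlim_max) ? rl.rlim_max : wanted;
            if (setrlimit(RLIMIT_NOFILE, &rl) != 0)
                return -1;
        }
        return 0;
    }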

Description: In MLNX_OFED versions earlier than v4.0-2.0.0.0 and VMA versions earlier than v8.2.10, the socket API is supported in the child process only if the parent process has not called any socket routines prior to calling fork().
In MLNX_OFED v4.0-1.6.1.0 and VMA v8.2.10 and later, the above restriction no longer exists; however, the child process cannot use any of the parent's socket resources.

Workaround: VMA supports fork() if VMA_FORK=1 is set (enabled) and the Mellanox-supported stack OFED 4.0-1.6.1.0 or later is used. MLNX_OFED support for fork() applies to kernels that support the MADV_DONTFORK flag for madvise() (2.6.17 and later), provided that the application does not use threads. See the sketch below.

The POSIX system() call is supported.

Keywords: There is limited support for fork().
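
A minimal sketch of the pattern these constraints allow, assuming VMA_FORK=1 was set in the environment before launch: fork() runs before any socket routine, and the child opens its own sockets rather than reusing the parent's.

    /* fork() before any socket call; each process opens its own sockets. */
    #include <sys/socket.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        pid_t pid = fork();      /* no socket routines called yet */
        if (pid == 0) {
            /* Child: must not reuse the parent's socket resources. */
            int fd = socket(AF_INET, SOCK_STREAM, 0);
            close(fd);
            _exit(0);
        }
        /* Parent: may now create sockets of its own. */
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        close(fd);
        waitpid(pid, NULL, 0);
        return 0;
    }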

Description: Applications written in Java use IPv6 by default, which is not supported by VMA and may lead to VMA not offloading packets.

Workaround: To make Java work with IPv4, instruct the application to use "java -Djava.net.preferIPv4Stack=true".

Keywords: Java applications using IPv6 stack

Description: When a VMA-enabled application is running, there are several cases in which it does not exit as expected when pressing CTRL-C.

Workaround: Enable SIGINT handling in VMA, by using:


#export VMA_HANDLE_SIGINTR=1

Keywords: The VMA application does not exit when you press CTRL-C.

Description: VMA does not support network interface or route changes during runtime.

Workaround: N/A

Keywords: Dynamic route or IP changes

Description: The send rate is higher than the receive rate; therefore, when running one sockperf server with one sockperf client, there will be packet loss.

Workaround: Limit the sender max PPS per receiver capacity.

Example below with the following configuration:

  • OS: Red Hat Enterprise Linux Server release 6.2 (Santiago)

  • Kernel: 2.6.32-220.el6.x86_64

  • Link layer: InfiniBand 56G / Ethernet 10G

  • GEN type: GEN3

  • Architecture: x86_64

  • CPU: 16

  • Core(s) per socket: 8

  • CPU socket(s): 2

  • NUMA node(s): 2

  • Vendor ID: GenuineIntel

  • CPU family: 6

  • Model: 45

  • Stepping: 7

  • CPU MHz: 2599.926


MC  1 socket            max pps 3M
MC 10 sockets (select)  max pps 1.5M
MC 20 sockets (select)  max pps 1.5M
MC 50 sockets (select)  max pps 1M
UC  1 socket            max pps 2.8M
UC 10 sockets           max pps 1.5M
UC 20 sockets           max pps 1.5M
UC 50 sockets           max pps

Keywords: Packet loss occurs when running sockperf at max PPS rate

Description: The behavior of the epoll EPOLLET (edge-triggered) and EPOLLOUT flags with TCP sockets differs between the OS and VMA (see the sketch after this entry):

  • VMA triggers an EPOLLOUT event on every received ACK (data only, not SYN/FIN).

  • The OS triggers an EPOLLOUT event only after the buffer was full.

Workaround: N/A

Keywords: VMA behavior of epoll EPOLLET (Edge Triggered) and EPOLLOUT flags with TCP sockets
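
For reference, a minimal sketch of the registration whose semantics differ (connected_fd stands for a hypothetical, already-connected TCP socket):

    /* Arm a TCP socket with edge-triggered writability notifications. */
    #include <sys/epoll.h>

    void arm_edge_triggered_out(int epfd, int connected_fd)
    {
        struct epoll_event ev;
        ev.events = EPOLLOUT | EPOLLET;   /* edge-triggered EPOLLOUT */
        ev.data.fd = connected_fd;
        /* Under VMA the EPOLLOUT edge fires on every received data ACK;
         * under the OS it fires only after the send buffer had filled. */
        epoll_ctl(epfd, EPOLL_CTL_ADD, connected_fd, &ev);
    }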

Description: VMA does not close connections located on the same node (i.e., does not send FIN to peers) when its process is terminated.

Workaround: N/A

Keywords: VMA connection

Description: VMA does not close connections (does not send FIN to peers) when its process is terminated while the /etc/init.d/vma service is stopped.

Workaround: Run /etc/init.d/vma start.

Keywords: VMA connection

Description: When a non-offloaded process joins the same MC address as a VMA process on the same machine, the non-offloaded process does not receive any traffic.

Workaround: Run both processes with VMA.

Keywords: MC traffic with VMA process and non VMA process on the same machine

Description: Occasionally, epoll with EPOLLONESHOT does not function properly.

Workaround: N/A

Keywords: epoll with EPOLLONESHOT

Description: Occasionally, when running a UDP SFNT-STREAM client with the poll muxer flag, the client side ends with an error:


ERROR: Sync messages at end of test lost
ERROR: Test failed.

This occurs only with the poll flag.

Workaround: Set a higher acknowledgment waiting time value in the sfnt-stream.

Keywords: SFNT-STREAM UDP with poll muxer flag ends with an error on client side

Description: Occasionally, SFNT-STREAM UDP client hangs when running multiple times.

Workaround: Set a higher acknowledgment waiting time value in the sfnt-stream.

Keywords: SFNT-STREAM UDP client hanging issue

Description: MC loopback in InfiniBand functions only between two different processes; it does not work between threads in the same process.

Workaround: N/A

Keywords: MC loopback in InfiniBand

Description: Ethernet loopback functions only if both sides are either offloaded or not offloaded.

Workaround: N/A

Keywords: Ethernet loopback is not functional between the VMA and the OS

Description: The following error may occur when running netperf TCP tests with VMA:


remote error 107 'Transport endpoint is not connected'

Workaround: Use netperf 2.6.0

Keywords: Error when running netperf 2.4.4 with VMA

Description: Occasionally, a packet is not sent if the socket is closed immediately after send() (also for blocking sockets).

Workaround: Wait several seconds after send() before closing the socket; see the sketch below.

Keywords: A packet is not sent if the socket is closed immediately after send()
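
A minimal sketch of the workaround; the 3-second delay is an arbitrary stand-in for "several seconds":

    /* Give VMA time to put the last send() on the wire before close(). */
    #include <stddef.h>
    #include <sys/socket.h>
    #include <unistd.h>

    void send_and_close(int fd, const void *buf, size_t len)
    {
        send(fd, buf, len, 0);
        sleep(3);   /* without a delay, the final packet may never be sent */
        close(fd);
    }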

Description: VMA can take more time than the OS to return from an iomux call if all sockets in that iomux are empty.

Workaround: N/A

Keywords: Iomux call with empty sockets

Description: TCP throughput with maximum rate may suffer from traffic "hiccups".

Workaround: Set mps=1000000.

Keywords: TCP throughput with maximum rate

Description: Netcat with VMA on SLES 11 SP1 does not function.

Workaround: N/A

Keywords: Netcat on SLES11 SP1

Description: Sharing of HW resources between different working threads might cause lock contention, which can affect performance.

Workaround: Use different ring allocation logics.

Keywords: Issues with performance with some multi-threaded applications

Description: Known NetPIPE bug: NetPIPE tries to access read-only memory.

Workaround: Upgrade to NetPIPE 3.7 or later.

Keywords: Segmentation fault on NetPIPE exit.

Description: If VMA runs with VMA_HANDLE_SIGINTR enabled, an error message might be written upon exit.

Workaround: Ignore the error message, or run VMA with VMA_HANDLE_SIGINTR disabled.

Keywords: When exiting, VMA logs errors when the VMA_HANDLE_SIGINTR is enabled.

Description: VMA suffers from high latency at low message rates.

Workaround: Use “Dummy Send”.

Keywords: VMA ping-pong latency degradation as PPS is lowered

Description: VMA does not support broadcast traffic.

Workaround: Use libvma.conf to pass broadcast through OS

Keywords: No support for direct broadcast

Description: Directing VMA to access an invalid memory area causes a segmentation fault.

Workaround: N/A

Keywords: There is no invalid-pointer handling in VMA

Description: VMA allocates resources on the first connect/send operation, which might take up to several tens of milliseconds.

Workaround: N/A

Keywords: First connect/send operation might take more time than expected

Description: Calling select() after shutdown of a socket returns "ready to write" instead of a timeout.

Workaround: N/A

Keywords: Calling select() after shutdown (write) returns socket ready to write, while select() is expected to return timeout

Description: VMA does not raise SIGPIPE on connection shutdown.

Workaround: N/A

Keywords: VMA does not raise sigpipe

Description: VMA polls the CQ for packets; if no packets are available in the socket layer, returning from the read call takes longer.

Workaround: N/A

Keywords: When there are no packets in the socket, it takes longer to return from the read call

Description: select() with more than 1024 sockets is not supported (see the sketch after this entry).

Workaround: Compile VMA with SELECT_BIG_SETSIZE defined.

Keywords: 1024 sockets
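
For context, a minimal sketch of where the limit bites: glibc's fd_set holds only FD_SETSIZE (1024) descriptors, so higher-numbered sockets cannot be registered for select() unless VMA is built with SELECT_BIG_SETSIZE defined.

    /* Reject descriptors that do not fit into a default-sized fd_set. */
    #include <stdio.h>
    #include <sys/select.h>

    int add_to_set(int fd, fd_set *set)
    {
        if (fd >= FD_SETSIZE) {
            fprintf(stderr, "fd %d exceeds FD_SETSIZE (%d)\n", fd, FD_SETSIZE);
            return -1;   /* fall back to ePoll at this scale */
        }
        FD_SET(fd, set);
        return 0;
    }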

© Copyright 2023, NVIDIA. Last updated on May 23, 2023.