NVIDIA Accelerated IO (XLIO) Documentation Rev 1.2.9
1.0

Known Issues

The following is a list of general existing limitations and known issues of the various components of XLIO.

Warning

Since XLIO is derived from Messaging Accelerator (VMA) v9.2.2, some issues refer to VMA.

Internal Ref. Number

Details

2667588

Description: IPv6 fragments are not supported. XLIO neither parses IPv6 Rx fragments nor transmits fragmented IPv6 packets. Tx of IPv6 UDP messages larger than the MTU will fail.

Workaround: N/A

Keywords: Packet fragmentation; XLIO; IPv6

Discovered in Version: 1.2.9

2667588

Description: IPv4-mapped IPv6 addresses are not supported. APIs such as bind, connect, and recvfrom treat an IPv4-mapped IPv6 address as a regular IPv6 address.
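For illustration only, a minimal C sketch of the kind of call affected (the address 192.0.2.10 and port 5001 are arbitrary example values); an IPv4-mapped IPv6 address passed to connect() is handled by XLIO as a plain IPv6 address:

    /* Sketch only: connect() with an IPv4-mapped IPv6 address (arbitrary example
     * values). XLIO treats the address as a regular IPv6 address. */
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <string.h>
    #include <sys/socket.h>

    int connect_mapped_example(void)
    {
        int fd = socket(AF_INET6, SOCK_STREAM, 0);
        struct sockaddr_in6 sa;
        memset(&sa, 0, sizeof(sa));
        sa.sin6_family = AF_INET6;
        sa.sin6_port   = htons(5001);                              /* example port */
        inet_pton(AF_INET6, "::ffff:192.0.2.10", &sa.sin6_addr);   /* IPv4-mapped form */
        return connect(fd, (struct sockaddr *)&sa, sizeof(sa));
    }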

Workaround: N/A

Keywords: IPv6 address; XLIO

Discovered in Version: 1.2.9

2667588

Description: XLIO conf file does not support IPv6 (see XLIO Configuration Parameters).

Workaround: N/A

Keywords: xlio.conf; IPv6

Discovered in Version: 1.2.9

2667588

Description: SocketXtreme API does not support IPv6.

Workaround: N/A

Keywords: Extra API; SocketXtreme; IPv6

Discovered in Version: 1.2.9

2667588

Description: IPv6 multicast is not supported.

Workaround: N/A

Keywords: Multicast; IPv6

Discovered in Version: 1.2.9

2667588

Description: The receive zero-copy API does not support IPv6.

Workaround: N/A

Keywords: IPv6; zero-copy

Discovered in Version: 1.2.9

2667588

Description: Peer Notification Service does not support IPv6 connections (see Peer Notification Service).

Workaround: N/A

Keywords: Peer Notification Service; IPv6

Discovered in Version: 1.2.9

21371706

Description: The socket API misbehaves for closed UDP sockets when XLIO_NGINX_UDP_POOL_SIZE != 0 is used, as the socket is not entirely closed: socket operations do not fail after the socket has been closed.
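A hedged C sketch of the behavior this describes (the payload and protocol setup are arbitrary example values):

    /* Sketch only: with XLIO_NGINX_UDP_POOL_SIZE != 0 the UDP socket is pooled
     * rather than fully closed, so a call on the closed fd may not return
     * -1/EBADF as it would when running over the OS. */
    #include <errno.h>
    #include <stdio.h>
    #include <sys/socket.h>
    #include <unistd.h>

    void closed_udp_fd_example(void)
    {
        int fd = socket(AF_INET, SOCK_DGRAM, 0);
        close(fd);
        ssize_t rc = send(fd, "x", 1, 0);   /* expected: -1 with errno == EBADF */
        printf("send after close: rc=%zd errno=%d\n", rc, (int)errno);
    }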

Workaround: N/A

Keywords: UDP; pool

Discovered in Version: 1.2.9

1542628

Description: Device memory programming is not supported on VMs that lack Blue Flame support.

Workaround: N/A

Keywords: MEMIC, Device Memory, Virtual Machine, Blue flame

Discovered in Version: 1.0.6

Description: If XLIO runs with XLIO_HANDLE_SIGINTR enabled, an error message might be written upon exiting.

Workaround: Ignore the error message, or run XLIO with XLIO_HANDLE_SIGINTR disabled.

Keywords: XLIO_HANDLE_SIGINTR, error message

Discovered in Version: 1.0.6

2783472

Description: The following XLIO_ERRORs will be displayed when running ping with root permissions:
XLIO ERROR: ring_simple[0x7f257d18d720]:256:create_resources() ibv_create_comp_channel for tx failed. m_p_tx_comp_event_channel = (nil) (errno=13 Permission denied)
XLIO ERROR: ib_ctx_handler213:mem_dereg() failed de-registering a memory region (errno=13 Permission denied)

Workaround: N/A

Keywords: XLIO_ERROR while running ping with root permissions

Discovered in Version: 1.0.6

965237

Description: The following sockets APIs are directed to the OS and are not offloaded by XLIO:

· int socketpair(int domain, int type, int protocol, int sv[2]);

· int dup(int oldfd);

· int dup2(int oldfd, int newfd);

Workaround: N/A

Keywords: sockets, socketpair, dup, dup2

965227

Description: Multicast (MC) loopback within a process is not supported by XLIO (illustrated in the sketch after this list):

  • If an application process opens two (or more) sockets on the same MC group, they will not get each other's traffic.
    Note: MC loopback between different XLIO processes always works.

  • Both sockets will receive all ingress traffic coming from the wire
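For illustration only, a minimal C sketch of the scenario described above (the group 239.1.1.1 is an arbitrary example address):

    /* Sketch only: two UDP sockets in the same process joining one multicast
     * group. Under XLIO, traffic sent from one socket is not looped back to
     * the other; both still receive traffic arriving from the wire. */
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <string.h>
    #include <sys/socket.h>

    static int join_group(const char *group)          /* e.g. "239.1.1.1" */
    {
        int fd = socket(AF_INET, SOCK_DGRAM, 0);
        struct ip_mreq mreq;
        memset(&mreq, 0, sizeof(mreq));
        inet_pton(AF_INET, group, &mreq.imr_multiaddr);
        mreq.imr_interface.s_addr = htonl(INADDR_ANY);
        setsockopt(fd, IPPROTO_IP, IP_ADD_MEMBERSHIP, &mreq, sizeof(mreq));
        return fd;
    }

    void mc_same_process_example(void)
    {
        int a = join_group("239.1.1.1");
        int b = join_group("239.1.1.1");   /* 'b' will not see traffic sent by 'a' */
        (void)a; (void)b;
    }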

Workaround: N/A

Keywords: Multicast, Loopback

965227

Description: MC loopback between XLIO and the OS is limited:

  • The OS will reject loopback traffic coming from the NIC

  • MC traffic from the OS to XLIO is functional

Workaround: N/A

Keywords: Multicast, Loopback

965227

Description: MC loopback Tx is currently disabled and setsockopt (IP_MULTICAST_LOOP) is not supported.

Workaround: N/A

Keywords: Multicast, Loopback

1011005

Description: The XLIO select() option supports up to 200 TCP sockets.

Workaround: Use epoll, which supports up to 6000 sockets.

Keywords: SockPerf

977899

Description: An unsuccessful attempt to connect to a local interface is reported by XLIO as "Connection timeout" rather than "Connection refused".

Workaround: N/A

Keywords: Verification

1019085

Description: poll() is limited in the number of sockets it supports.

Workaround: Use epoll for a large number of sockets (tested with up to 6000).

Keywords: Poll, ePoll

Description: The following XLIO_PANIC will be displayed when there are not enough open files defined on the server:
XLIO PANIC : si[fd=1023]:51:sockinfo() failed to create internal epoll (ret=-1 Too many open files)

Workaround: Verify that the maximum number of open FDs (file descriptors) in the system (ulimit -n) is twice the number of needed sockets, as XLIO's internal logic requires one additional FD per offloaded socket.
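As a hedged illustration of this sizing rule, the limit can also be raised programmatically before sockets are created (the figure of 10,000 planned sockets is an arbitrary example):

    /* Sketch only: raise RLIMIT_NOFILE to twice the number of planned offloaded
     * sockets, since XLIO requires one additional FD per offloaded socket.
     * The 10000 figure is an arbitrary example. */
    #include <sys/resource.h>

    int raise_fd_limit_example(void)
    {
        const rlim_t planned_sockets = 10000;
        struct rlimit rl;
        if (getrlimit(RLIMIT_NOFILE, &rl) != 0)
            return -1;
        rl.rlim_cur = 2 * planned_sockets;       /* soft limit: 2x needed sockets */
        if (rl.rlim_cur > rl.rlim_max)
            rl.rlim_cur = rl.rlim_max;           /* cannot exceed the hard limit */
        return setrlimit(RLIMIT_NOFILE, &rl);
    }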

Keywords: XLIO_PANIC while opening large number of sockets

Description: Applications written in Java use IPv6 by default, which is not supported by XLIO and may lead to XLIO not offloading packets.

Workaround: To make Java work with IPv4, instruct the application to use "java -Djava.net.preferIPv4Stack=true".

Keywords: Java applications using IPv6 stack

Description: When an XLIO-enabled application is running, there are several cases in which it does not exit as expected when pressing CTRL-C.

Workaround: Enable SIGINT handling in XLIO by using:
# export XLIO_HANDLE_SIGINTR=1

Keywords: The XLIO application does not exit when you press CTRL-C.

Description: XLIO does not support network interface or route changes during runtime.

Workaround: N/A

Keywords: Dynamic route or IP changes

Description: The behavior of the epoll EPOLLET (edge-triggered) and EPOLLOUT flags with TCP sockets differs between the OS and XLIO (see the sketch after this list):

  • XLIO – triggers an EPOLLOUT event on every received ACK (data only, not SYN/FIN)

  • OS – triggers an EPOLLOUT event only after the buffer was full
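A minimal C sketch of the registration that exhibits this difference (tcp_fd is assumed to be a connected, offloaded TCP socket):

    /* Sketch only: register a connected TCP socket for edge-triggered EPOLLOUT.
     * Under XLIO the EPOLLOUT event fires on every received data ACK; under the
     * OS it fires only after the send buffer was full. */
    #include <sys/epoll.h>

    int watch_epollout_et_example(int epfd, int tcp_fd)
    {
        struct epoll_event ev = { .events = EPOLLOUT | EPOLLET };  /* edge-triggered writability */
        ev.data.fd = tcp_fd;
        return epoll_ctl(epfd, EPOLL_CTL_ADD, tcp_fd, &ev);
    }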

Workaround: N/A

Keywords: XLIO behavior of epoll EPOLLET (Edge Triggered) and EPOLLOUT flags with TCP sockets

Description: XLIO does not close connections located on the same node (i.e., does not send FIN to the peers) when its own process is terminated.

Workaround: N/A

Keywords: XLIO connection

Description: XLIO does not close connections (does not send FIN to the peers) when its process is terminated by stopping /etc/init.d/XLIO.

Workaround: Launch /etc/init.d/XLIO start

Keywords: XLIO connection

Description: When a non-offloaded process joins the same MC address as another XLIO process on the same machine, the non-offloaded process does not get any traffic.

Workaround: Run both processes with XLIO

Keywords: MC traffic with XLIO process and non XLIO process on the same machine

Description: Occasionally, epoll with EPOLLONESHOT does not function properly.

Workaround: N/A

Keywords: epoll with EPOLLONESHOT

Description: Occasionally, when running a UDP SFNT-STREAM client with the poll muxer flag, the client side ends with the following error:
ERROR: Sync messages at end of test lost
ERROR: Test failed.

This occurs only with the poll flag.

Workaround: Set a higher acknowledgment waiting time value in the sfnt-stream.

Keywords: SFNT-STREAM UDP with poll muxer flag ends with an error on client-side

Description: Occasionally, the SFNT-STREAM UDP client hangs when run multiple times.

Workaround: Set a higher acknowledgment waiting time value in the sfnt-stream.

Keywords: SFNT-STREAM UDP client hanging issue

Description: Ethernet loopback functions only if both sides are either offloaded or not offloaded.

Workaround: N/A

Keywords: Ethernet loopback is not functional between the XLIO and the OS

Description: The following error may occur when running netperf TCP tests with XLIO:
remote error 107 'Transport endpoint is not connected'

Workaround: Use netperf 2.6.0

Keywords: Error when running netperf 2.4.4 with XLIO

Description: Occasionally, a packet is not sent if the socket is closed immediately after send() (also for blocking sockets).

Workaround: Wait several seconds after send() before closing the socket
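A hedged C sketch of this workaround (the two-second delay and payload are arbitrary example values):

    /* Sketch only: delay close() so XLIO has time to transmit the last send().
     * The 2-second delay is an arbitrary example value. */
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    void send_then_close_example(int fd)
    {
        const char msg[] = "last message";   /* example payload */
        send(fd, msg, strlen(msg), 0);
        sleep(2);      /* wait several seconds before closing, per the workaround */
        close(fd);
    }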

Keywords: A packet is not sent if the socket is closed immediately after send()

Description: XLIO can take more time than the OS to return from an iomux call if all sockets in the iomux are empty.

Workaround: N/A

Keywords: Iomux call with empty sockets

Description: Sharing HW resources between different working threads might cause lock contention, which can affect performance.

Workaround: Use different ring allocation logic.

Keywords: Issues with performance with some multi-threaded applications

Description: XLIO does not support broadcast traffic.

Workaround: Use libxlio.conf to pass broadcast traffic through the OS.

Keywords: No support for direct broadcast

Description: Directing XLIO to access an invalid memory area will cause a segmentation fault.

Workaround: N/A

Keywords: There is no non-valid pointer handling in XLIO

Description: XLIO allocates resources on the first connect/send operation, which might take up to several tens of milliseconds.

Workaround: N/A

Keywords: First connect/send operation might take more time than expected

Description: Calling select() after shutdown of a socket will return "ready to write" instead of timing out.
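A minimal C sketch of the call pattern (fd is assumed to be a connected, offloaded TCP socket; the one-second timeout is an example value):

    /* Sketch only: after shutdown(fd, SHUT_WR), select() on the write set would
     * be expected to time out (return 0), but under XLIO it reports the socket
     * as ready to write (returns 1). */
    #include <sys/select.h>
    #include <sys/socket.h>

    int select_after_shutdown_example(int fd)
    {
        shutdown(fd, SHUT_WR);

        fd_set wfds;
        FD_ZERO(&wfds);
        FD_SET(fd, &wfds);

        struct timeval tv = { .tv_sec = 1, .tv_usec = 0 };   /* example timeout */
        return select(fd + 1, NULL, &wfds, NULL, &tv);
    }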

Workaround: N/A

Keywords: Calling select() after shutdown (write) returns socket ready to write, while select() is expected to return timeout

Description: XLIO does not raise SIGPIPE on connection shutdown.

Workaround: N/A

Keywords: XLIO does not raise sigpipe

Description: XLIO polls the CQ for packets; if no packets are available in the socket layer, it takes longer to return from the read call.

Workaround: N/A

Keywords: When there are no packets in the socket, it takes longer to return from the read call

Description: select() with more than 1024 sockets is not supported.

Workaround: Compile XLIO with SELECT_BIG_SETSIZE defined.

Keywords: 1024 sockets

Description: Possible performance degradation running NGINX QUIC.

Workaround: Use XLIO_CQ_MODERATION_ENABLE=0.

Keywords: QUIC, NGINX, XLIO_CQ_MODERATION_ENABLE

© Copyright 2023, NVIDIA. Last updated on May 23, 2023.