Monitoring – the vma_stats Utility
Networking applications open various types of sockets.
The VMA library holds the following counters:
- Per socket state and performance counters
- Internal performance counters that accumulate information on select(), poll(), and epoll_wait() usage by the whole application. An additional performance counter logs the CPU usage of VMA during select(), poll(), or epoll_wait() calls. VMA calculates this counter only if the VMA_CPU_USAGE_STATS parameter is enabled; otherwise the counter is not in use and displays its default value of zero.
- VMA internal CQ performance counters
- VMA internal RING performance counters
- VMA internal Buffer Pool performance counters
Use the included vma_stats utility to view the per-socket information and performance counters during runtime.
Note: For TCP connections, vma_stats shows only offloaded traffic, not "OS traffic".
The following list describes the basic and additional vma_stats utility options:
- Shows VMA statistics for the process with pid: <pid>.
- Sets the shared memory directory path to <directory>.
- Shows VMA statistics for the application: <application>.
- Finds the PID and shows statistics for the running VMA instance (default).
- When this flag is set to inactive, shared objects (files) are not removed.
- Prints a report every <n> seconds (default: 1 sec).
- Performs <n> report print cycles and then exits; use the value 0 for infinite.
- Sets the view type; one view shows a netstat-like view of all sockets.
- Sets the details mode.
- Dumps statistics for fd number <fd> using log level <level>; use the value 0 for all open fds.
- Sets the VMA log level to <level> (1 <= level <= 7).
- Sets the VMA log detail level to <level> (0 <= level <= 3).
- Logs only sockets that match <list> or <range>; format: 4-16 or 1,9 (or a combination).
- Prints the version number.
- Prints a help message.
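A typical invocation combines these options. The sketch below attaches to a specific process by PID using the -p option; the -i (interval) and -c (cycles) flag letters are assumptions and may differ between VMA versions, so verify them with vma_stats -h. The application name my_vma_app is a placeholder.

```shell
# Print one statistics report per second for 10 cycles, for the VMA
# process running my_vma_app. Flag letters -i and -c are assumptions;
# check `vma_stats -h` on your system.
vma_stats -p "$(pidof my_vma_app)" -i 1 -c 10
```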
The following sections contain examples of the vma_stats utility.
The following example demonstrates basic use of the vma_stats utility.
If there is only a single process running over VMA, it is not necessary to use the -p option, since vma_stats automatically recognizes the process.
If no process with a suitable pid is running over VMA, the output is:
If an appropriate process was found, the output is:
- A single socket with user fd=14 was created
- Received 140479898 packets, 274374 Kilobytes via the socket
- Transmitted 140479898 packets, 274374 Kilobytes via the socket
- All the traffic was offloaded. No packets were transmitted or received via the OS.
- There were no missed Rx polls (see VMA_RX_POLL). This implies that the receiving thread did not enter a blocked state, and therefore there was no context switch to hurt latency.
- There are no transmission or reception errors on this socket
vma_stats presents not only cumulative statistics, but also enables you to view deltas of VMA counter updates. This example demonstrates the use of the "deltas" mode.
- Three sockets were created (fds: 15, 19, and 23)
- Received 15186 packets, 29 Kilobytes during the last second via fds 15 and 19
- Transmitted 15186 packets, 29 Kilobytes during the last second via fds 15 and 19
- Not all the traffic was offloaded: on fd 23, 15185 packets (22 Kilobytes) were transmitted and received via the OS. This means that fd 23 was used for unicast traffic.
- No transmission or reception errors were detected on any socket
- The application used select for I/O multiplexing
- 45557 packets were placed in socket ready queues (over the course of the last second): 30372 of them offloaded (15186 via fd 15 and 15186 via fd 19), and 15185 were received via the OS (through fd 23)
- There were no missed Select polls (see VMA_SELECT_POLL). This implies that the receiving thread did not enter a blocked state. Thus, there was no context switch to hurt latency.
- The CPU usage in the select call is 70%
You can use this information to calculate how CPU usage is divided between VMA and the application. For example, when the CPU usage is 100%, 70% is used by VMA for polling the hardware, and the remaining 30% is used by the application for processing the data.
This example presents the most detailed vma_stats output.
- A single socket with user fd=14 was created
- The socket is a member of multicast group: 18.104.22.168
- Received 786133 packets, 1128530 Kilobytes via the socket during the last second
- No transmitted data
- All the traffic was offloaded. No packets were transmitted or received via the OS
- There were almost no missed Rx polls (see VMA_RX_POLL)
- There were no transmission or reception errors on this socket
- The socket's receive buffer size is 16777216 Bytes
- There were no dropped packets caused by the socket receive buffer limit (see VMA_RX_BYTES_MIN)
- Currently, one packet of 1470 Bytes is located in the socket receive queue
- The maximum number of packets ever located simultaneously in the socket's receive queue is 16
- No packets were dropped by the CQ
- No packets in the CQ ready queue (packets which were drained from the CQ and are waiting to be processed by the upper layers)
- The maximum number of packets drained from the CQ during a single drain cycle is 511 (see VMA_CQ_DRAIN_WCE_MAX)
- The RING_ETH received 786133 packets during this period
- The RING_ETH received 1192953 kilobytes during this period. This includes header bytes.
- 786137 interrupts were requested by the ring during this period
- 78613 interrupts were intercepted by the ring during this period
- The moderation engine was set to trigger an interrupt for every 10 packets and with maximum time of 181 usecs
- There were no retransmissions
- The number of buffers currently available in the RX pool is 168000
- The number of buffers currently available in the TX pool is 199488
- There were no buffer requests that failed (no buffer errors)
This example demonstrates how you can get multicast group membership information via vma_stats.
This is an example of the "netstat-like" view of vma_stats (-v 5).
- Two processes are running VMA
- PID 1576 has one UDP socket bound to all interfaces on port 44522
- PID 1618 has one TCP listener socket bound to all interfaces on port 11111
This is an example of a log of socket performance counters along with an explanation of the results (using VMA_STATS_FILE parameter).
- No transmission or reception errors occurred on this socket (user fd=10).
- All the traffic was offloaded. No packets were transmitted or received via the OS.
- There were practically no missed Rx polls (see VMA_RX_POLL and VMA_SELECT_POLL). This implies that the receiving thread did not enter a blocked state. Thus, there was no context switch to hurt latency.
- There were no dropped packets caused by the socket receive buffer limit (see VMA_RX_BYTES_MIN).
This is an example of the vma_stats fd dump utility for an established TCP socket, using log level = info.
- Fd 17 is a descriptor of established TCP socket (22.214.171.124:58795 -> 126.96.36.199:6666)
- Fd 17 is offloaded by VMA
- The current usage of the receive buffer is 0 bytes, while the max possible is 87380
- The connection (PCB) flags are TF_WND_SCALE and TF_NODELAY (PCB0x140)
- Window scaling is enabled; the receive and send scale factors both equal 7
- The congestion window equals 1662014
- Unsent queue is empty
- There is a single packet of 14 bytes in the unacked queue (seqno 12678066)
- The last acknowledged sequence number is 12678066
Additional information about these values can be found on the VMA wiki page.
Use the VMA logs to trace VMA operations. VMA logs are controlled by the VMA_TRACELEVEL variable. This variable's default value is 3, meaning that the only logs obtained are those with a severity of PANIC, ERROR, or WARNING.
You can increase the VMA_TRACELEVEL value up to 6 (as described in VMA Configuration Parameters) to see more information about each thread's operation. Use VMA_LOG_DETAILS=3 to add a timestamp to each log line; this helps when checking the time difference between events written to the log.
Use VMA_LOG_FILE=/tmp/my_file.log to save the events to a file. It is recommended to check these logs for any VMA warnings and errors. Refer to the Troubleshooting section for help resolving the issues reported in the log.
VMA replaces a single '%d' appearing in the log file name with the PID of the process loaded with VMA. This helps when running multiple instances of VMA, each with its own log file.
When VMA_LOG_COLORS is enabled, VMA uses a color scheme when logging: red for errors and warnings, and dim for low-level debug messages.
Use VMA_HANDLE_SIGSEGV to print a backtrace if a segmentation fault occurs.
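Putting these variables together, a verbose-logging run might look like the following sketch. Here my_app is a placeholder application, and loading VMA via LD_PRELOAD is one common way of running an application over VMA; adjust to how VMA is loaded in your environment.

```shell
# Verbose logging: trace level 6, timestamps on each line,
# a per-process log file (%d becomes the PID), and a backtrace
# on segmentation faults.
VMA_TRACELEVEL=6 \
VMA_LOG_DETAILS=3 \
VMA_LOG_FILE=/tmp/vma_%d.log \
VMA_HANDLE_SIGSEGV=1 \
LD_PRELOAD=libvma.so ./my_app
```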
Look at the Ethernet counters (using the ifconfig command) to understand whether the traffic is passing through the kernel or through VMA (Rx and Tx).
For tcpdump to capture offloaded traffic (on ConnectX-4 and above), please follow instructions in section Offloaded Traffic Sniffer in the MLNX_OFED User Manual.
Look at the NIC counters to monitor HW interface level packets received and sent, drops, errors, and other useful information.
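As a sketch, the OS-level and NIC-level counters can be compared with standard tools; eth2 is a placeholder interface name, and ethtool -S prints the NIC's hardware statistics.

```shell
# OS-level counters: traffic that passed through the kernel stack.
ip -s link show eth2          # or: ifconfig eth2
# NIC-level counters: all packets seen by the hardware, including
# VMA-offloaded traffic, drops, and errors.
ethtool -S eth2 | grep -E 'packets|drop|err'
```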
Peer Notification Service
The peer notification service handles TCP half-open connections, where one side discovers that the connection was lost while the other side still sees it as active.
The peer-notification daemon is started at system initialization or manually under super user permissions.
The daemon collects information about TCP connections from all running VMA processes. Upon termination of a VMA process (which can leave TCP connections half open), the daemon notifies the peers (by sending Reset packets) so that they delete the TCP connections on their side.
This section lists problems that can occur when using VMA, and describes solutions for these problems.
High Log Level
This warning message indicates that you are using VMA with a high log level.
The VMA_TRACELEVEL variable value is set to 4 or more, which is good for troubleshooting but not for live runs or performance measurements.
Solution: Set VMA_TRACELEVEL to its default value 3.
On running an application with VMA, the following error is reported:
Solution: Check that libvma is properly installed, and that libvma.so is located in /usr/lib (or in /usr/lib64, for 64-bit machines).
On attempting to install the VMA RPM, the following error is reported:
Solution: Install the RPM as a privileged user (root).
The following warning is reported:
Solution: When working with root, increase the maximum locked memory to 'unlimited' by using the following command:
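A minimal sketch of such a command, assuming the limit is raised for the current shell:

```shell
# Raise the locked-memory limit for the current shell and its children
# (run as root; use /etc/security/limits.conf for a persistent change).
ulimit -l unlimited
```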
When working as a non-privileged user, ask your administrator to increase the maximum locked memory to unlimited.
Lack of huge page resources in the system. The following warning is reported:
This warning message means that you are using VMA with huge page memory allocation enabled (VMA_MEM_ALLOC_TYPE=2), but not enough huge page resources are available in the system. VMA will use contiguous pages instead.
Note that VMA_MEM_ALLOC_TYPE=1 is not supported when working with the Microsoft hypervisor. In this case, use VMA_MEM_ALLOC_TYPE=0 (malloc).
If you want VMA to take full advantage of the performance benefits of huge pages, restart the application after adding more huge page resources to your system similar to the details in the warning message above, or try to free unused huge page shared memory segments with the script below.
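As a sketch of adding huge page resources, the standard kernel knobs can be used; the values below are examples only (not necessarily the ones from the warning message), must be sized to your application, and do not persist across reboots.

```shell
# Reserve 800 huge pages (with 2 MB pages, roughly 1.6 GB) and allow
# large shared memory segments. Example values only; run as root.
echo 800 > /proc/sys/vm/nr_hugepages
echo 1000000000 > /proc/sys/kernel/shmmax
```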
If you are running multiple instances of your application loaded with VMA, you will probably need to increase the values used in the above example.
Check that your host machine has enough free memory after allocating the huge page resources for VMA. Low system memory resources may cause your system to hang.
Use "ipcs -m" and "ipcrm -m shmid" to check and clean unused shared memory segments.
Use the following script to release VMA unused huge page resources:
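A sketch of such a cleanup script, assuming the unused segments appear in ipcs -m with key 0x00000000 (segments marked for destruction / no longer attached). Verify with ipcs -m first, since this removes every segment with that key.

```shell
# Remove shared memory segments listed with key 0x00000000.
for shmid in $(ipcs -m | grep 0x00000000 | awk '{print $2}'); do
    echo "Removing shared memory segment id: $shmid"
    ipcrm -m "$shmid"
done
```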
- Wrong ARP resolution when multiple ports are on the same network.
When two (or more) ports are configured on the same network (e.g. 192.168.1.1/24 and 192.168.1.2/24), VMA will detect the MAC address of only one of the interfaces. This results in incorrect ARP resolution.
This is due to the way Linux handles ARP responses in this configuration. By default, Linux returns the same MAC address for both IPs. This behavior is called "ARP Flux".
To fix this, change some of the interface settings:
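The usual fix for ARP Flux is to tighten the kernel's ARP policy; a sketch using the standard sysctl knobs follows (these may or may not be the exact settings the manual refers to, and they can also be applied per interface instead of via "all"):

```shell
# Reply to ARP requests only on the interface that actually owns the
# target IP, and advertise the best matching local source address.
sysctl -w net.ipv4.conf.all.arp_ignore=1
sysctl -w net.ipv4.conf.all.arp_announce=2
```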
To verify the issue is resolved, clear the ARP tables on a different server that is on the same network and use the arping utility to verify that each IP reports its own MAC address correctly:
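For example, on a peer host on the same network (eth0 is a placeholder for the peer's interface; the IPs are the ones from the example above):

```shell
ip neigh flush all                 # clear the ARP table
arping -c 3 -I eth0 192.168.1.1    # should resolve to port 1's MAC
arping -c 3 -I eth0 192.168.1.2    # should resolve to port 2's MAC
```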
- VMA process cannot establish connection with daemon (vmad) in Microsoft hypervisor environment.
When working with the Microsoft Hypervisor, the VMA daemon (vmad) must be enabled in order to submit Traffic Control (TC) rules, which offload the traffic to the TAP device in case of plug-out events.
The following warning is reported during VMA startup:
The following warning is reported during any connection establishment/termination:
To fix this, run "vmad" as root.
- VMA cannot offload traffic when RoCE LAG is enabled.
RoCE LAG is a feature meant for mimicking Ethernet bonding for IB devices and is available for dual port cards only. When in RoCE LAG mode, instead of having an IB device per physical port (for example mlx5_0 and mlx5_1), only one IB device is present for both ports.
The following warning appears upon VMA startup:
RoCE LAG should be disabled in order for VMA to be able to offload traffic.
ConnectX-4 and above HCAs with MLNX_OFED print the following warning with instructions on how to disable RoCE LAG:
Device memory programming is not supported on VMs that lack Blue Flame support.
VMA explicitly disables the Device Memory capability if it detects that Blue Flame support is missing on the node on which the user application was launched with VMA. The following warning message appears: