NVIDIA BlueField Platform Software Troubleshooting Guide

OVS-DOCA

This page offers troubleshooting information for OVS-DOCA users and customers.

NVIDIA OVS is built on upstream OVS (version 2.17.8) and supports all upstream commands and tools.

It is advisable to become familiar with the following commands:

  • systemctl status openvswitch – Checks the status of the Open vSwitch service on RPM-based OSs.

  • systemctl restart openvswitch – Restarts the Open vSwitch service on RPM-based OSs.

  • systemctl status openvswitch-switch – Checks the status of the Open vSwitch service on Debian-based OSs.

  • systemctl restart openvswitch-switch – Restarts the Open vSwitch service on Debian-based OSs.

  • ovs-vsctl show – Prints a brief overview of the Open vSwitch database contents.

  • ovs-vsctl list open_vswitch – Lists global information and settings of OVS, including the DOCA/DPDK and OVS versions, and whether DOCA/DPDK mode is active.

  • ovs-vsctl list interface – Prints a list of all interfaces with detailed information.

  • ovs-vsctl list bridge – Prints a list of all bridges with detailed information.

  • dmesg – Prints Linux driver and firmware errors.

  • ovs-ofctl dump-flows <bridge> – Prints all OpenFlow entries in the bridge's tables.

  • ovs-appctl dpctl/dump-flows -m – Prints all active data path flows with counters and offload indications.

  • ovs-appctl dpctl/dump-conntrack -m – Prints all connections, including 5-tuple info, state, and offload status.

  • ovs-appctl dpif-netdev/pmd-stats-show – Prints DOCA/DPDK PMD (software) counters.

  • ovs-appctl dpif-netdev/pmd-stats-clear – Resets DOCA/DPDK PMD (software) counters.

  • ovs-vsctl set Open_vSwitch . other_config:max-idle=<msec> – Sets the data path flow aging time to the specified number of milliseconds.

  • ovs-appctl dpctl/offload-stats-show – Prints DOCA/DPDK offload counters, including the number of offloaded data path flows and connections.

  • ovs-vsctl set Open_vSwitch . other_config:hw-offload=false – Disables hardware offload (requires a service restart).

  • ovs-metrics – Live monitor of software and hardware counters.

  • ethtool -S <PF> – Obtains additional device statistics.

  • tcpdump -i <ib_device> – Dumps non-offloaded traffic from all representors controlled by the specified IB device.

  • ovs-appctl vlog/list – Prints the current Open vSwitch logging levels.

  • ovs-appctl vlog/set <file:destination:lvl> – Controls Open vSwitch logging levels. Recommended settings for debugging DOCA/DPDK: dpif_netdev:file:DBG, netdev_offload_dpdk:file:DBG, ovs_doca:file:DBG, dpdk_offload_doca:file:DBG. Note: logging levels revert to their defaults when the OVS service is restarted.

  • ovs-appctl dpif-netdev/dump-packets – Enables slow-path packet tracing in the Open vSwitch log.

  • ovs-appctl doca-pipe-group/dump – Dumps DOCA pipe groups, showing the created chains of masks for each group. Note: a pipe group is a chain of DOCA pipes, where a miss on one pipe leads to another, each with different masks and actions. Special predefined groups exist.

  • ovs-vsctl set Open_vSwitch . other_config:dpdk-offload-trace=true – Enables DPDK offload tracing (requires an OVS service restart). This setting dumps DPDK offloads directly for debugging purposes.

  • ovs-appctl dpdk/dump-offloads – If DPDK offload tracing is enabled, dumps DPDK offloads in DPDK RTE flow format, including dpctl flows and connection tracking offloads.

Log Files

The OVS log file is located on the DPU Arm side at:


/var/log/openvswitch/ovs-vswitchd.log

Log levels can be configured for each component and each output option: console, syslog, and file.

By default, the logging settings are as follows:

  • Console: OFF

  • Syslog: ERR

  • File: INFO

To display the current logging level, run:


# ovs-appctl vlog/list
                 console    syslog    file
                 -------    ------    ------
backtrace          OFF        ERR       INFO
bfd                OFF        ERR       INFO
bond               OFF        ERR       INFO
bridge             OFF        ERR       INFO
bundle             OFF        ERR       INFO
bundles            OFF        ERR       INFO
cfm                OFF        ERR       INFO
collectors         OFF        ERR       INFO
command_line       OFF        ERR       INFO
connmgr            OFF        ERR       INFO
conntrack          OFF        ERR       INFO
conntrack_offload   OFF        ERR       INFO
conntrack_tp       OFF        ERR       INFO
coverage           OFF        ERR       INFO
ct_dpif            OFF        ERR       INFO
daemon             OFF        ERR       INFO
daemon_unix        OFF        ERR       INFO
dns_resolve        OFF        ERR       INFO
dpdk               OFF        ERR       INFO
dpdk_offload_doca   OFF        ERR       INFO
dpif               OFF        ERR       INFO
dpif_lookup_autovalidator   OFF        ERR       INFO
dpif_lookup_generic   OFF        ERR       INFO
dpif_mfex_extract_study   OFF        ERR       INFO
dpif_netdev        OFF        ERR       INFO
dpif_netdev_extract   OFF        ERR       INFO
dpif_netdev_impl   OFF        ERR       INFO
dpif_netdev_lookup   OFF        ERR       INFO
dpif_netlink       OFF        ERR       INFO
dpif_netlink_rtnl   OFF        ERR       INFO
dpif_offload       OFF        ERR       INFO
dpif_offload_netdev   OFF        ERR       INFO
dpif_offload_netlink   OFF        ERR       INFO
entropy            OFF        ERR       INFO
fail_open          OFF        ERR       INFO
fatal_signal       OFF        ERR       INFO
flow               OFF        ERR       INFO
hmap               OFF        ERR       INFO
id_refmap          OFF        ERR       INFO
in_band            OFF        ERR       INFO
ipf                OFF        ERR       INFO
ipfix              OFF        ERR       INFO
jsonrpc            OFF        ERR       INFO
lacp               OFF        ERR       INFO
lldp               OFF        ERR       INFO
lldpd              OFF        ERR       INFO
lldpd_structs      OFF        ERR       INFO
lockfile           OFF        ERR       INFO
memory             OFF        ERR       INFO
meta_flow          OFF        ERR       INFO
native_tnl         OFF        ERR       INFO
netdev             OFF        ERR       INFO
netdev_dpdk        OFF        ERR       INFO
netdev_dpdk_vdpa   OFF        ERR       INFO
netdev_dummy       OFF        ERR       INFO
netdev_linux       OFF        ERR       INFO
netdev_offload     OFF        ERR       INFO
netdev_offload_dpdk   OFF        ERR       INFO
netdev_offload_tc   OFF        ERR       INFO
netdev_vport       OFF        ERR       INFO
netflow            OFF        ERR       INFO
netlink            OFF        ERR       INFO
netlink_conntrack   OFF        ERR       INFO
netlink_notifier   OFF        ERR       INFO
netlink_socket     OFF        ERR       INFO
nx_match           OFF        ERR       INFO
odp_execute        OFF        ERR       INFO
odp_util           OFF        ERR       INFO
offload_metadata   OFF        ERR       INFO
ofp_actions        OFF        ERR       INFO
ofp_bundle         OFF        ERR       INFO
ofp_connection     OFF        ERR       INFO
ofp_errors         OFF        ERR       INFO
ofp_flow           OFF        ERR       INFO
ofp_group          OFF        ERR       INFO
ofp_match          OFF        ERR       INFO
ofp_meter          OFF        ERR       INFO
ofp_monitor        OFF        ERR       INFO
ofp_msgs           OFF        ERR       INFO
ofp_packet         OFF        ERR       INFO
ofp_port           OFF        ERR       INFO
ofp_protocol       OFF        ERR       INFO
ofp_queue          OFF        ERR       INFO
ofp_table          OFF        ERR       INFO
ofp_util           OFF        ERR       INFO
ofproto            OFF        ERR       INFO
ofproto_dpif       OFF        ERR       INFO
ofproto_dpif_mirror   OFF        ERR       INFO
ofproto_dpif_monitor   OFF        ERR       INFO
ofproto_dpif_rid   OFF        ERR       INFO
ofproto_dpif_upcall   OFF        ERR       INFO
ofproto_dpif_xlate   OFF        ERR       INFO
ofproto_xlate_cache   OFF        ERR       INFO
ovs_doca           OFF        ERR       INFO
ovs_lldp           OFF        ERR       INFO
ovs_numa           OFF        ERR       INFO
ovs_rcu            OFF        ERR       INFO
ovs_replay         OFF        ERR       INFO
ovs_router         OFF        ERR       INFO
ovs_thread         OFF        ERR       INFO
ovsdb_cs           OFF        ERR       INFO
ovsdb_error        OFF        ERR       INFO
ovsdb_idl          OFF        ERR       INFO
ox_stat            OFF        ERR       INFO
pcap               OFF        ERR       INFO
pmd_perf           OFF        ERR       INFO
poll_loop          OFF        ERR       INFO
process            OFF        ERR       INFO
rconn              OFF        ERR       INFO
reconnect          OFF        ERR       INFO
refmap             OFF        ERR       INFO
route_table        OFF        ERR       INFO
rstp               OFF        ERR       INFO
rstp_sm            OFF        ERR       INFO
sflow              OFF        ERR       INFO
signals            OFF        ERR       INFO
socket_util        OFF        ERR       INFO
socket_util_unix   OFF        ERR       INFO
stp                OFF        ERR       INFO
stream             OFF        ERR       INFO
stream_fd          OFF        ERR       INFO
stream_replay      OFF        ERR       INFO
stream_ssl         OFF        ERR       INFO
stream_tcp         OFF        ERR       INFO
stream_unix        OFF        ERR       INFO
svec               OFF        ERR       INFO
system_stats       OFF        ERR       INFO
tc                 OFF        ERR       INFO
timeval            OFF        ERR       INFO
tunnel             OFF        ERR       INFO
unixctl            OFF        ERR       INFO
userspace_tso      OFF        ERR       INFO
util               OFF        ERR       INFO
uuid               OFF        ERR       INFO
vconn              OFF        ERR       INFO
vconn_stream       OFF        ERR       INFO
vlog               OFF        ERR       INFO
vswitchd           OFF        ERR       INFO
xenserver          OFF        ERR       INFO

To set logging levels, run:


# ovs-appctl vlog/set <file:destination:lvl>

The recommended settings to debug DOCA/DPDK are as follows:


# ovs-appctl vlog/set dpif_netdev:file:DBG netdev_offload_dpdk:file:DBG ovs_doca:file:DBG dpdk_offload_doca:file:DBG
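
After raising the log levels, the log can be followed live while reproducing the issue:

# tail -f /var/log/openvswitch/ovs-vswitchd.log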

Note

The logging levels revert to their default settings after restarting the OVS service.


OpenFlow Table Dump

You can dump OpenFlow tables with the following command:


# ovs-ofctl dump-flows <bridge>

Each OpenFlow rule has the following counters:

  • Number of packets that have matched the OpenFlow rule

  • Number of bytes that have matched the OpenFlow rule

  • Time, in seconds, since the OpenFlow rule was installed

Example:


# ovs-ofctl dump-flows br-int
 cookie=0x0, duration=65.630s, table=0, n_packets=4, n_bytes=234, arp actions=NORMAL
 cookie=0x0, duration=65.622s, table=0, n_packets=20, n_bytes=1960, icmp actions=NORMAL
 cookie=0x0, duration=65.605s, table=0, n_packets=0, n_bytes=0, ct_state=-trk,ip actions=ct(table=1,zone=5)
 cookie=0x0, duration=65.562s, table=1, n_packets=0, n_bytes=0, ct_state=+new+trk,ip actions=ct(commit,zone=5),NORMAL
 cookie=0x0, duration=65.554s, table=1, n_packets=0, n_bytes=0, ct_state=+est+trk,ct_zone=5,ip actions=NORMAL
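
ovs-ofctl dump-flows also accepts a match expression, which is useful for narrowing the output to a specific table or protocol. For example (the bridge name and match terms are illustrative):

# ovs-ofctl dump-flows br-int table=1
# ovs-ofctl dump-flows br-int icmp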


DataPath Flow Dump

You can dump existing DataPath flows using this command:


# ovs-appctl dpctl/dump-flows -m

Each DataPath flow provides the following information:

  • Specification (match criteria)

  • Actions applied

  • Offloaded indication

The DataPath types include:

  • OVS

  • DOCA

  • DPDK

  • TC

Additional details include:

  • Number of packets that have matched this DataPath flow (including partial offloads)

  • Number of bytes that have matched this DataPath flow (including partial offloads)

  • Number of packets partially handled in hardware by this DataPath flow

  • Number of bytes partially handled in hardware by this DataPath flow

  • Time, in seconds, since the last use of the flow

Example:


# ovs-appctl dpctl/dump-flows -m
flow-dump from pmd on cpu core: 21
ufid:c79d3e57-10eb-427f-a5d3-2785f0cbbac1, skb_priority(0/0), tunnel(tun_id=0x2a,src=7.7.7.8,dst=7.7.7.7,ttl=64/0,eth_src=10:70:fd:d9:0d:a4/00:00:00:00:00:00,eth_dst=10:70:fd:d9:0d:c8/00:00:00:00:00:00,type=gre/none,flags(-df+key)), skb_mark(0/0), ct_state(0/0), ct_zone(0/0), ct_mark(0/0), ct_label(0/0), recirc_id(0), dp_hash(0/0), in_port(gre_sys), packet_type(ns=0,id=0), eth(src=c2:32:df:66:71:af,dst=e4:73:41:08:00:02), eth_type(0x0800), ipv4(src=1.1.1.8/0.0.0.0,dst=1.1.1.7/0.0.0.0,proto=1,tos=0/0,ttl=64/0,frag=no), icmp(type=0/0,code=0/0), packets:1, bytes:144, used:1.488s, offloaded:yes, dp:doca, actions:pf0vf0, dp-extra-info:miniflow_bits(9,1)
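
To focus on offloaded flows only, the output can be filtered on the offload indication, for example:

# ovs-appctl dpctl/dump-flows -m | grep "offloaded:yes"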


PMD Counters

You can dump the OVS software processing counters using this command:


# ovs-appctl dpif-netdev/pmd-stats-show

The "packets received" counter shows the number of packets processed by software and can be used to monitor issues related to hardware offloads.

Example:


# ovs-appctl dpif-netdev/pmd-stats-show
pmd thread numa_id 0 core_id 21:
  packets received: 75
  packet recirculations: 14
  avg. datapath passes per packet: 1.19
  phwol hits: 5
  mfex opt hits: 0
  simple match hits: 0
  emc hits: 0
  smc hits: 0
  megaflow hits: 28
  avg. subtable lookups per megaflow hit: 1.82
  miss with success upcall: 56
  miss with failed upcall: 0
  avg. packets per output batch: 1.02
  idle cycles: 7405350461306 (100.00%)
  processing cycles: 16284620 (0.00%)
  avg cycles per packet: 98738223279.01 (7405366745926/75)
  avg processing cycles per packet: 217128.27 (16284620/75)
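
To observe whether software-processed packets keep increasing while traffic is expected to be offloaded, the counter can be sampled periodically; a simple illustrative approach:

# watch -n 1 'ovs-appctl dpif-netdev/pmd-stats-show | grep "packets received"'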

To reset these statistics, use this command:


# ovs-appctl dpif-netdev/pmd-stats-clear


Offload Counters

To dump the OVS offload counters, use the following command:


# ovs-appctl dpctl/offload-stats-show

The following counters are exposed using this command:

  • Enqueued offloads: number of rules pending insertion into HW (including infrastructure rules)

  • Inserted offloads: number of rules offloaded to HW (including infrastructure rules)

  • CT uni-dir Connections: number of unidirectional CT connections offloaded to HW

  • CT bi-dir Connections: number of bidirectional CT connections offloaded to HW

Example:


# ovs-appctl dpctl/offload-stats-show
HW Offload stats:
  Total                 Enqueued offloads:       0
  Total                 Inserted offloads:      42
  Total            CT uni-dir Connections:       0
  Total             CT bi-dir Connections:       1
  Total   Cumulative Average latency (us):  102761
  Total    Cumulative Latency stddev (us):  131560
  Total  Exponential Average latency (us):  125942
  Total   Exponential Latency stddev (us):  132435
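
To quickly check whether offloads are accumulating in the queue rather than being inserted, only the relevant counters can be extracted, for example:

# ovs-appctl dpctl/offload-stats-show | grep -E "Enqueued|Inserted"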


Metrics

This command provides a live dump of both software and hardware counters:


# ovs-metrics

Note

The ovs-metrics script requires the python3-doca-openvswitch package. To install it:

  • On Ubuntu: sudo apt install python3-doca-openvswitch

  • On RHEL: sudo yum install python3-doca-openvswitch

The ovs-metrics tool dumps the following information every second:

  • sw-pkts: total number of packets passed in SW

  • sw-pps: packets per second in SW over the last second

  • sw-conns: number of CT connections in SW

  • sw-cps: new connections per second in SW over the last second

  • hw-pkts: total number of packets passed in HW

  • hw-pps: packets per second in HW over the last second

  • hw-conns: number of CT connections in HW

  • hw-cps: new connections per second in HW over the last second

  • enqueued: number of rules pending HW offload

  • hw-rules: number of offloaded rules in HW (including infrastructure rules)

  • hw-rps: new HW rules offloaded per second over the last second
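
For example, the live metrics can be captured to a file for later analysis while a test is running (the file path is illustrative):

# ovs-metrics | tee /tmp/ovs-metrics.log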

DOCA Pipe Group Dump

The following command provides a dump of DOCA pipe groups, showing each created group and inner pipe structure:


# ovs-appctl doca-pipe-group/dump

Example Output:


esw_mgr_port_id=0, group_id=0x00000000
  esw=0x7fd9be8fe048, group_id=0x00000000, priority=2, fwd.type=port, match.parser_meta.port_meta[4, changeable]=0xffffffff/0xffffffff, match.parser_meta.outer_ip_fragmented[1, changeable]=0xff/0xff, match.outer.eth.type[2, changeable]=0xffff/0xffff, match.outer.l3_type[4, specific]=0x02000000/0x02000000, empty_actions_mask
  esw=0x7fd9be8fe048, group_id=0x00000000, priority=4, fwd.type=pipe, empty_match, empty_actions_mask
esw_mgr_port_id=0, group_id=0xfd000000(post-ct)
  esw=0x7fd9be8fe048, group_id=0xfd000000(post-ct), priority=4, fwd.type=pipe, empty_match, empty_actions_mask
esw_mgr_port_id=0, group_id=0xff000000(post-meter)
  esw=0x7fd9be8fe048, group_id=0xff000000(post-meter), priority=4, fwd.type=pipe, empty_match, empty_actions_mask
esw_mgr_port_id=0, group_id=0xf2000000(sample-post-mirror)
  esw=0x7fd9be8fe048, group_id=0xf2000000(sample-post-mirror), priority=1, fwd.type=drop, match.outer.eth.type[2, changeable]=0xffff/0x8809, empty_actions_mask
  esw=0x7fd9be8fe048, group_id=0xf2000000(sample-post-mirror), priority=3, fwd.type=pipe, empty_match, actions.meta.pkt_meta[4, changeable]=0xffffffff/0x00f0ffff
esw_mgr_port_id=0, group_id=0xf1000000(sample)
esw_mgr_port_id=0, group_id=0xf3000000(miss)
esw_mgr_port_id=0, group_id=0xfb000000(post-hash)
  esw=0x7fd9be8fe048, group_id=0xfb000000(post-hash), priority=4, fwd.type=pipe, empty_match, empty_actions_mask

This command displays the created groups, where each group is identified by a group ID and includes a list of DOCA flow pipes arranged in a chain (with misses leading from one pipe to the next) and sorted by priority. These pipe groups are shown in the order they were created. Special group IDs are labeled (e.g., post-hash). The dump also includes the forwarding type for each pipe and any header rewrite actions, if applicable.

Crash Backtraces and Core Dumps

The Open vSwitch package is compiled with libunwind support, so if a crash occurs, a backtrace should be recorded in the log for the relevant binary that crashed.

For example, if ovs-vswitchd crashes, the backtrace will be logged in /var/log/openvswitch/ovs-vswitchd.log.

To ensure core dumps are generated upon a crash, you need to configure the system's ulimit and set suid_dumpable.


# ulimit -c unlimited
# sysctl -w fs.suid_dumpable=1
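
Optionally, the kernel can be pointed at an explicit core file location; the directory and pattern below are only an illustration:

# mkdir -p /var/crash
# sysctl -w kernel.core_pattern=/var/crash/core.%e.%p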

To debug the core dump effectively, you need to install a debug info package that includes the binary symbols.

For RPM-based distributions:


# dnf install openvswitch-debuginfo

For Debian-based distributions:


# apt install openvswitch-dbg
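
With the debug symbols installed, the core dump can be inspected with gdb; the binary path and core file name below are illustrative:

# gdb /usr/sbin/ovs-vswitchd /var/crash/core.ovs-vswitchd.<pid>
(gdb) bt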

Failure to Start OVS

If OVS fails to start after enabling DOCA mode, it is often due to missing hugepage configuration.

Check the OVS log file (/var/log/openvswitch/ovs-vswitchd.log) for further details on the failure.

When hugepages are not configured, you may encounter the following error:


2024-03-13T14:59:26.806Z|00025|dpdk|ERR|EAL: Cannot get hugepage information.
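
As a minimal sketch, hugepages can be allocated at runtime and verified before restarting OVS; the page count below is only an example and should be sized for the deployment:

# echo 2048 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
# grep -i huge /proc/meminfo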

Failure to Add Port to Bridge

Issues with adding a port to a bridge can stem from several causes, such as:

  1. DOCA/DPDK not being initialized. In this case, you might encounter the following error:


    error: "could not open network device pf0 (Address family not supported by protocol)"

    To resolve this issue:

    1. Run:


      ovs-vsctl set o . other_config:doca-init=true 

    2. Restart OVS.

  2. The eSwitch manager was not added to OVS. In this case, the following error will occur:


    error: "could not add network device pf0vf0 to ofproto (Resource temporarily unavailable)"

    To resolve this issue:

    Add the eSwitch manager (PF) port to the OVS bridge (see the example after this list).

  3. A DPDK-type port is added to a bridge whose datapath_type is not set to netdev:

    1. To use DPDK (DOCA) ports, the bridge's datapath_type must be set to netdev (see the example after this list). Otherwise, the following error will occur:


      error: "could not add network device eth2 to ofproto (Invalid argument)"

    2. To verify the bridge's datapath_type, use the command:


      ovs-vsctl get bridge <BR> datapath_type

  4. Adding a non-existent port to OVS results in the following error:


    error: "rep1: could not set configuration (No such device)"

Traffic Failure

Failure to pass traffic between interfaces could be due to the following reasons:

  1. Port not added successfully: In this case, refer to Failure to Add Port to Bridge.

  2. Incorrect VF subnets: If traffic is sent between interfaces on different subnets, it will not pass unless explicit OpenFlow rules are configured to allow such traffic.

  3. Conflicting kernel routing tables: Ensure there are no conflicts in the kernel's routing table; each unique IP address should only be routed through a single interface (see the example after this list).

  4. Relevant representors not added to the OVS bridge: If a VF's representor is not added to the OVS bridge, traffic from that VF will not be seen by the OVS.

  5. Tunnels:

    1. No neighbour discovery between tunnel endpoints: For tunneled traffic to function, there must be ping connectivity between the tunnel endpoints (see the example after this list).

      1. Ensure the OVS bridge has the tunnel IP.

      2. Ensure the remote endpoint has an interface configured with the remote tunnel IP.

    2. The same VNI should be configured between the local and remote systems.
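
For cases 3 and 5 above, a couple of quick checks (the address placeholder is illustrative):

# ip route show
# ping -c 3 <remote_tunnel_endpoint_ip>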

Performance Degradation (No Offload)

Performance degradation may suggest that Open vSwitch (OVS) is not being offloaded.

  • Verify offload status. Run:


    # ovs-vsctl get Open_vSwitch . other_config:hw-offload

    • If hw-offload = true – hardware offload (fast path) is enabled (desired result).

    • If hw-offload = false – hardware offload is disabled and traffic takes the slow path (software).

To Enable Offloading (if hw-offload = false)

  • For RHEL/CentOS, run:


    # ovs-vsctl set Open_vSwitch . other_config:hw-offload=true
    # systemctl restart openvswitch
    # systemctl enable openvswitch

  • For Ubuntu/Debian:


    # ovs-vsctl set Open_vSwitch . other_config:hw-offload=true
    # systemctl restart openvswitch-switch

Check Offload Status of Rules

  • Run ovs-appctl dpctl/dump-flows -m and verify that dp:ovs is not present in the rule (see the example below).

    • If dp:ovs is present, check the OVS logs to determine why offloading failed.

  • Ensure no packets are being processed by software by reviewing "PMD counters."
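
For example, non-offloaded datapath flows can be isolated with a simple filter:

# ovs-appctl dpctl/dump-flows -m | grep "dp:ovs"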

Degradation might also result from recent changes related to connection tracking zone (ct-zone) support and memory zone (mem-zone) requirements.

Details:

  • Increased Support for Connection Tracking Zones: The system now supports up to 65,535 ct-zones, a significant increase from the previous limit of 255.

  • Basic Pipes Implementation: Transitioning to basic pipes has increased the number of mem-zones required per ct-zone to approximately 36, four times more than before.

Reaching Maximum Number of Memory Zones

Due to the increased mem-zone requirement per ct-zone, users may more easily encounter the maximum number of mem-zones (default set at 2560) when configuring a large number of ct-zones.

Error in Logs:

If you hit the max mem-zones limit, the following error appears in the logs:

2024-07-30T19:17:07.585Z|00002|dpdk(hw_offload4)|ERR|EAL: memzone_reserve_aligned_thread_unsafe(): Number of requested memzone segments exceeds max memzone segments (2560 >= 2560)

Workaround

To mitigate this issue, configure more mem-zones using the following command:


ovs-vsctl set o . other_config:dpdk-max-memzones=<desired_number>

Replace <desired_number> with the appropriate number of mem-zones required for your configuration.

Example Scenario

Scenario: You are configuring 500 ct-zones. Given that each ct-zone now requires approximately 36 mem-zones, you need a total of 18,000 mem-zones (500 ct-zones * 36 mem-zones/ct-zone). It is preferred to reserve extra room for other pipes that use mem-zones (for example, keep the default 2560 mem-zones as headroom).

Steps:

  1. Calculate the required number of mem-zones:


    500 ct-zones * 36 mem-zones/ct-zone + 2560 reserve = 20,560 mem-zones

  2. Set the desired number of mem-zones using the command:


    ovs-vsctl set o . other_config:dpdk-max-memzones=20560 

By following these steps, you can effectively manage and mitigate degradation issues related to the increased support for connection tracking zones and the associated memory zone requirements.

© Copyright 2024, NVIDIA. Last updated on Nov 12, 2024.