NVIDIA DPDK Documentation MLNX_DPDK_22.11_2310.5.1 LTS

NVIDIA BlueField-2 Host to Arm Flow Control

This capability enables NVIDIA BlueField-2 host-to-Arm flow control by combining an RX queue available descriptors threshold with a host shaper.

For more information on how these two mechanisms interact to throttle BlueField-2 host-to-Arm traffic, refer to https://doc.dpdk.org/guides/nics/mlx5.html

Note

In the mlx5 PMD, LWM is used interchangeably with the available descriptors threshold.

To set the available descriptors threshold of port 1, RX queue 0 to 30% of the RX queue size:

testpmd> set port 1 rxq 0 avail_thresh 30

To disable the available descriptors threshold on port 1, RX queue 0:

testpmd> set port 1 rxq 0 avail_thresh 0
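
Applications can configure the same threshold directly through the generic ethdev API instead of testpmd. The sketch below is illustrative only: the helper name and error handling are assumptions, and only the rte_eth_rx_avail_thresh_set() call itself comes from the DPDK API (the threshold is a percentage of the RX queue size, and 0 disables it).

#include <stdio.h>
#include <rte_ethdev.h>

/* Hypothetical helper mirroring the two testpmd commands above:
 * arm the available descriptors threshold at thresh_pct percent of the
 * RX queue size, or disable it by passing 0.
 */
static int
set_rxq_avail_thresh(uint16_t port_id, uint16_t queue_id, uint8_t thresh_pct)
{
    int ret = rte_eth_rx_avail_thresh_set(port_id, queue_id, thresh_pct);

    if (ret != 0)
        printf("setting avail_thresh to %u%% failed: %d\n", thresh_pct, ret);
    return ret;
}

/* Usage, matching the commands above:
 *   set_rxq_avail_thresh(1, 0, 30);   -- set port 1 rxq 0 avail_thresh 30
 *   set_rxq_avail_thresh(1, 0, 0);    -- set port 1 rxq 0 avail_thresh 0
 */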

To remove the current host shaper and enable available descriptors threshold triggered mode on port 1:

testpmd> mlx5 set port 1 host_shaper avail_thresh_triggered 1 rate 0

To remove the current host shaper and disable available descriptors threshold triggered mode on port 1:

testpmd> mlx5 set port 1 host_shaper avail_thresh_triggered 0 rate 0

To configure an immediate host shaper rate of 1 Gbps and disable available descriptors threshold triggered mode on port 1 (the rate unit is 100 Mbps, so a rate value of 10 corresponds to 1 Gbps):

testpmd> mlx5 set port 1 host_shaper avail_thresh_triggered 0 rate 10
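
The three testpmd commands above map onto the rte_pmd_mlx5_host_shaper_config() API used by the example further below. The sketch here is illustrative: the wrapper function name is hypothetical, the flag constant follows the example below (check rte_pmd_mlx5.h in your DPDK version for the exact name), and the rate argument is in units of 100 Mbps.

#include <rte_bitops.h>
#include <rte_pmd_mlx5.h>

/* Hypothetical helper mapping the three testpmd commands above onto the
 * mlx5 host shaper API for a given port.
 */
static void
host_shaper_command_examples(int port_id)
{
    /* mlx5 set port 1 host_shaper avail_thresh_triggered 1 rate 0 */
    rte_pmd_mlx5_host_shaper_config(port_id, 0,
        RTE_BIT32(MLX5_HOST_SHAPER_FLAG_AVAIL_THRESH_TRIGGERED));

    /* mlx5 set port 1 host_shaper avail_thresh_triggered 0 rate 0 */
    rte_pmd_mlx5_host_shaper_config(port_id, 0, 0);

    /* mlx5 set port 1 host_shaper avail_thresh_triggered 0 rate 10 (1 Gbps) */
    rte_pmd_mlx5_host_shaper_config(port_id, 10, 0);
}

The following example, adapted from the testpmd host shaper support, shows these calls together with handling of the RTE_ETH_EVENT_RX_AVAIL_THRESH event and the delayed re-arming of the threshold.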

/* Enable host shaper avail-thresh-triggered mode. */
rte_pmd_mlx5_host_shaper_config(port_id, 1, RTE_BIT32(MLX5_HOST_SHAPER_FLAG_AVAIL_THRESH_TRIGGERED));

/* Disable host shaper avail-thresh-triggered mode and set an immediate rate. */
rte_pmd_mlx5_host_shaper_config(port_id, rate, 0);

static uint8_t host_shaper_avail_thresh_triggered[RTE_MAX_ETHPORTS];
#define SHAPER_DISABLE_DELAY_US 100000 /* 100ms */

/* The delayed handler that re-arms the avail_thresh event and disables the host shaper. */
static void
mlx5_test_host_shaper_disable(void *args)
{
    uint32_t port_rxq_id = (uint32_t)(uintptr_t)args;
    uint16_t port_id = port_rxq_id & 0xffff;
    uint16_t qid = (port_rxq_id >> 16) & 0xffff;
    struct rte_eth_rxq_info qinfo;

    printf("%s disable shaper\n", __func__);
    if (rte_eth_rx_queue_info_get(port_id, qid, &qinfo)) {
        printf("rx_queue_info_get returns error\n");
        return;
    }
    /* Rearm the available descriptor threshold event. */
    if (rte_eth_rx_avail_thresh_set(port_id, qid, qinfo.avail_thresh)) {
        printf("config avail_thresh returns error\n");
        return;
    }
    /* Only disable the shaper when avail_thresh_triggered is set. */
    if (host_shaper_avail_thresh_triggered[port_id] &&
        rte_pmd_mlx5_host_shaper_config(port_id, 0, 0))
        printf("%s disable shaper returns error\n", __func__);
}

void
mlx5_test_avail_thresh_event_handler(uint16_t port_id, uint16_t rxq_id)
{
    struct rte_eth_dev_info dev_info;
    uint32_t port_rxq_id = port_id | (rxq_id << 16);

    /* Ensure it is an mlx5 port. */
    if (rte_eth_dev_info_get(port_id, &dev_info) != 0 ||
        (strncmp(dev_info.driver_name, "mlx5", 4) != 0))
        return;
    /* Disable the shaper (and re-arm the event) from an alarm after a short delay. */
    rte_eal_alarm_set(SHAPER_DISABLE_DELAY_US,
                      mlx5_test_host_shaper_disable,
                      (void *)(uintptr_t)port_rxq_id);
    printf("%s port_id:%u rxq_id:%u\n", __func__, port_id, rxq_id);
}

/* The avail_thresh event handler. */
static int
eth_event_callback(portid_t port_id, enum rte_eth_event_type type,
                   void *param, void *ret_param)
{
    uint16_t rxq_id;
    int ret;

    if (type == RTE_ETH_EVENT_RX_AVAIL_THRESH) {
        /* The avail_thresh query API rewinds rxq_id, no need to check the max RxQ number. */
        for (rxq_id = 0; ; rxq_id++) {
            ret = rte_eth_rx_avail_thresh_query(port_id, &rxq_id, NULL);
            if (ret <= 0)
                break;
            printf("Received avail_thresh event, port: %u, rxq_id: %u\n",
                   port_id, rxq_id);

            mlx5_test_avail_thresh_event_handler(port_id, rxq_id);
        }
    }
    return 0;
}

/* Configure avail_thresh to 30% of the RX queue size. */
rte_eth_rx_avail_thresh_set(port_id, qid, 30);

/* Register the avail_thresh event handler. */
rte_eth_dev_callback_register(port_id, RTE_ETH_EVENT_RX_AVAIL_THRESH,
                              eth_event_callback, NULL);
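
To tie the pieces together, an application typically registers the event callback once and arms the threshold on each RX queue at startup. The sketch below is illustrative, not part of the documented example: the function name is hypothetical, it assumes it sits in the same file as the example above (so it can reference eth_event_callback and host_shaper_avail_thresh_triggered), and it assumes the port is already configured and started.

/* Illustrative start-up wiring for the example above. */
static int
setup_host_to_arm_flow_control(uint16_t port_id)
{
    struct rte_eth_dev_info dev_info;
    uint16_t qid;
    int ret;

    ret = rte_eth_dev_info_get(port_id, &dev_info);
    if (ret != 0)
        return ret;

    /* Get notified when any RX queue crosses its threshold. */
    ret = rte_eth_dev_callback_register(port_id, RTE_ETH_EVENT_RX_AVAIL_THRESH,
                                        eth_event_callback, NULL);
    if (ret != 0)
        return ret;

    /* Let the PMD engage the shaper from the threshold events. */
    ret = rte_pmd_mlx5_host_shaper_config(port_id, 0,
        RTE_BIT32(MLX5_HOST_SHAPER_FLAG_AVAIL_THRESH_TRIGGERED));
    if (ret != 0)
        return ret;
    /* Record the mode so the delayed handler knows to disable the shaper. */
    host_shaper_avail_thresh_triggered[port_id] = 1;

    /* Arm the threshold at 30% of each RX queue. */
    for (qid = 0; qid < dev_info.nb_rx_queues; qid++) {
        ret = rte_eth_rx_avail_thresh_set(port_id, qid, 30);
        if (ret != 0)
            return ret;
    }
    return 0;
}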

© Copyright 2024, NVIDIA. Last updated on Jan 9, 2025.