NVIDIA DPDK Documentation MLNX_DPDK_22.11_2310.5.1 LTS

Port Affinity Match

Port Affinity Match exposes, at the DPDK level, which physical port a packet belongs to. The capability is provided through a new pattern item type for aggregated port affinity; its value reflects the physical port affinity of the received packets.

Additionally, a tx_affinity setting can be configured by calling rte_eth_dev_map_aggr_tx_affinity(); its value determines the physical port the packets will be sent to.

When a dual-port device is configured as a bond in Linux, this capability enables the application to identify the physical ingress port of a packet and send the ACK out on the same port.
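As a rough illustration of how these two APIs fit together, the following sketch (the helper name map_txqs_to_aggr_ports is hypothetical, not part of this release) spreads the configured Tx queues evenly across the aggregated physical ports, mirroring the testpmd configuration in step 1 below. It is assumed to be called while the port is stopped, as in the testpmd sequence.

    #include <rte_ethdev.h>

    /* Illustrative helper: map the configured Tx queues evenly across the
     * aggregated physical ports of a bonded device. Affinity values are
     * 1-based; 0 would leave the queue without a fixed physical port. */
    static int
    map_txqs_to_aggr_ports(uint16_t port_id, uint16_t nb_txq)
    {
        int aggr_ports = rte_eth_dev_count_aggr_ports(port_id);
        uint16_t q;

        if (aggr_ports <= 0)
            return aggr_ports; /* no aggregated ports reported */

        for (q = 0; q < nb_txq; q++) {
            uint8_t affinity = (uint8_t)((q * aggr_ports) / nb_txq + 1);
            int ret = rte_eth_dev_map_aggr_tx_affinity(port_id, q, affinity);

            if (ret != 0)
                return ret;
        }
        return 0;
    }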

  1. Configure TxQ indexes 0 and 1 with Tx affinity 1, and TxQ indexes 2 and 3 with Tx affinity 2:

    testpmd> port stop all
    testpmd> port config 0 txq 0 affinity 1
    testpmd> port config 0 txq 1 affinity 1
    testpmd> port config 0 txq 2 affinity 2
    testpmd> port config 0 txq 3 affinity 2
    testpmd> port start all

  2. Create Rx rules that match the port affinity values and redirect the packets to different Rx queues with distinct mark values:

    testpmd> flow create 0 ingress pattern eth / aggr_affinity affinity is 1 / end actions mark id 0x1234 / queue index 0 / end
    testpmd> flow create 0 ingress pattern eth / aggr_affinity affinity is 2 / end actions mark id 0xabcd / queue index 2 / end

  3. Send packets from the different hardware ports of the remote side.

    Packets received on the first hardware port arrive at Rx queue 0 with mark value 0x1234, while packets that arrive at Rx queue 2 with mark value 0xabcd were received on the second hardware port. The application can use these mark values to select a reply Tx queue mapped to the same physical port, as sketched below.
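    A minimal sketch of that receive-side check follows (not part of the original example; the helper name select_reply_txq is illustrative). It assumes the testpmd configuration above: marks 0x1234/0xabcd and TxQ pairs 0-1 and 2-3 mapped to physical ports 1 and 2. The mark set by the flow rules is reported in the mbuf when the RTE_MBUF_F_RX_FDIR_ID flag is set.

    #include <rte_mbuf.h>

    /* Illustrative helper: pick a Tx queue on the same physical port the
     * packet arrived on, based on the mark set by the flow rules above. */
    static uint16_t
    select_reply_txq(const struct rte_mbuf *m)
    {
        if (m->ol_flags & RTE_MBUF_F_RX_FDIR_ID) {
            if (m->hash.fdir.hi == 0x1234)
                return 0; /* TxQ 0/1 have Tx affinity 1 (first physical port) */
            if (m->hash.fdir.hi == 0xabcd)
                return 2; /* TxQ 2/3 have Tx affinity 2 (second physical port) */
        }
        return 0; /* no mark: fall back to a default Tx queue */
    }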

  1. Configure the Tx affinity of a TxQ using the DPDK API (the equivalent of the testpmd commands above):

    ret = rte_eth_dev_count_aggr_ports(port_id);
    if (ret < 0) {
        printf("Failed to count the aggregated ports: (%s)\n",
               strerror(-ret));
        return;
    }

    ret = rte_eth_dev_map_aggr_tx_affinity(port_id, queue_id, affinity);
    if (ret != 0) {
        printf("Failed to map tx queue with an aggregated port: %s\n",
               rte_strerror(-ret));
        return;
    }

  2. Create an Rx rule that matches a port affinity value and redirects the packets to a specific Rx queue:

    struct rte_flow *
    generate_port_affinity_match_flow(uint16_t port_id, uint8_t affinity_value,
                                      struct rte_flow_error *error)
    {
        struct rte_flow_attr attr = { .ingress = 1 };
        struct rte_flow_action action[MAX_ACTION_NUM];
        struct rte_flow *flow = NULL;
        struct rte_flow_item_aggr_port_affinity affinity = {0};
        struct rte_flow_item_aggr_port_affinity affinity_mask = { .affinity = 0xff };
        struct rte_flow_item pattern[] = {
            {
                .type = RTE_FLOW_ITEM_TYPE_ETH,
                .spec = NULL,
                .mask = NULL,
            },
            {
                .type = RTE_FLOW_ITEM_TYPE_AGGR_PORT_AFFINITY,
                .spec = &affinity,
                .mask = &affinity_mask,
            },
            {
                /* The final pattern item must always be of type END. */
                .type = RTE_FLOW_ITEM_TYPE_END,
            },
        };
        struct rte_flow_action_queue queue = { .index = 1 };

        /* Set the requested affinity value. */
        affinity.affinity = affinity_value;

        /* Create the queue action; the action list is END-terminated. */
        action[0].type = RTE_FLOW_ACTION_TYPE_QUEUE;
        action[0].conf = &queue;
        action[1].type = RTE_FLOW_ACTION_TYPE_END;

        flow = rte_flow_create(port_id, &attr, pattern, action, error);
        return flow;
    }
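    As a hypothetical usage sketch (not part of the original example), the function above could be called once per aggregated physical port. Note that this sample directs every match to Rx queue 1, so a real application would also vary the target queue per affinity value:

    struct rte_flow_error flow_error;
    int aggr_ports = rte_eth_dev_count_aggr_ports(port_id);
    uint8_t value;

    for (value = 1; value <= (uint8_t)aggr_ports; value++) {
        struct rte_flow *flow =
            generate_port_affinity_match_flow(port_id, value, &flow_error);

        if (flow == NULL)
            printf("Flow creation failed for affinity %u: %s\n",
                   value, flow_error.message ? flow_error.message : "unknown");
    }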

© Copyright 2024, NVIDIA. Last updated on Jan 9, 2025.