
Packet Re-Routing to the Kernel

In some cases, an application may receive a packet that should have been handled by the kernel. Traditionally, the application uses the Kernel NIC Interface (KNI) or another mechanism to transfer such packets to the kernel.

With a bifurcated driver, a rule can route packets matching a pattern (for example, IPv4 packets) to the DPDK application while the rest of the traffic is received by the kernel. However, when most of the traffic should be received in DPDK except for specific patterns (for example, ICMP packets) that should be processed by the kernel, it is easier to re-route those packets with a single rule.

This feature introduces a new rte_flow action (RTE_FLOW_ACTION_TYPE_SEND_TO_KERNEL) which allows the application to re-route packets directly to the kernel without software involvement. The packets are received by the kernel through the netdev sharing the same device as the DPDK port on which this action is configured.

This action mostly suits the bifurcated driver model.

  • The feature works only in isolated mode, due to priority manipulations in the kernel module.

  • Users should not insert broadcast/multicast rules in table group 0, as this causes the kernel module to change rule priorities and prevents the feature from working.

For example, the following testpmd command creates a rule in group 1 that re-routes all IPv4 packets (EtherType 0x0800) to the kernel:

flow create 0 ingress priority 0 group 1 pattern eth type spec 0x0800 type mask 0xffff / end actions send_to_kernel / end

Code Snippets

  1. Enable isolated mode on the port:


    rte_flow_isolate(port_id, 1, &error);

  2. Create the rule:


    struct rte_flow_error error;
    struct rte_flow_attr attr = {
            .group = 1,
            .priority = 1,
            .ingress = 1,
            .transfer = 0
    };

    struct rte_flow_item pattern[] = {
            {
                    .type = RTE_FLOW_ITEM_TYPE_ETH,
            },
            {
                    /* Fill in any additional matching criteria. */
            },
            {
                    .type = RTE_FLOW_ITEM_TYPE_END
            }
    };

    struct rte_flow_action actions[] = {
            {
                    .type = RTE_FLOW_ACTION_TYPE_SEND_TO_KERNEL
            },
            {
                    .type = RTE_FLOW_ACTION_TYPE_END
            }
    };

    rte_flow_create(port_id, &attr, pattern, actions, &error);
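    The return value of rte_flow_create() should be checked; the rule can also be validated first with rte_flow_validate(), which takes the same arguments. A minimal sketch building on the snippet above (port_id is assumed to be an initialized port):

    /* Validate the rule before insertion, then check the creation result. */
    if (rte_flow_validate(port_id, &attr, pattern, actions, &error) == 0) {
            struct rte_flow *flow = rte_flow_create(port_id, &attr,
                                                    pattern, actions, &error);
            if (flow == NULL)
                    printf("flow creation failed: %s\n",
                           error.message ? error.message : "(no message)");
    }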

The action is also supported by the asynchronous (template) flow API. The equivalent testpmd commands are:

flow pattern_template 0 create ingress pattern_template_id 1 template eth / end
flow actions_template 0 create ingress actions_template_id 1 template send_to_kernel / end mask send_to_kernel / end
flow template_table 0 create table_id 1 group 1 priority 1 ingress rules_number 64 pattern_template 1 actions_template 1
flow queue 0 create 0 template_table 1 pattern_template 0 actions_template 0 postpone no pattern eth / end actions send_to_kernel / end
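Queue-based flow operations are asynchronous, so the result of the enqueued creation can be retrieved afterwards with testpmd's pull command (port 0, queue 0, matching the commands above):

flow pull 0 queue 0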

Code Snippets

The equivalent flow can be created with the asynchronous flow C API:

struct rte_flow_error err;
uint32_t queue_id = 0;   /* flow queue used for the async operations */
void *user_data = NULL;  /* echoed back in the operation result */

const struct rte_flow_pattern_template_attr attrp = {.ingress = 1};
struct rte_flow_item pattern[] = {
        [0] = {.type = RTE_FLOW_ITEM_TYPE_ETH},
        [1] = {.type = RTE_FLOW_ITEM_TYPE_END},
};

struct rte_flow_pattern_template *pattern_template =
        rte_flow_pattern_template_create(port_id, &attrp, pattern, &err);

/* egress and transfer are also supported. */
struct rte_flow_actions_template_attr attra = {.ingress = 1};
struct rte_flow_action act[] = {
        [0] = {.type = RTE_FLOW_ACTION_TYPE_SEND_TO_KERNEL},
        [1] = {.type = RTE_FLOW_ACTION_TYPE_END},
};
struct rte_flow_action msk[] = {
        [0] = {.type = RTE_FLOW_ACTION_TYPE_SEND_TO_KERNEL},
        [1] = {.type = RTE_FLOW_ACTION_TYPE_END},
};

struct rte_flow_actions_template *actions_template =
        rte_flow_actions_template_create(port_id, &attra, act, msk, &err);

struct rte_flow_template_table_attr table_attr = {
        .flow_attr.ingress = 1,
        .flow_attr.group = 1,
        .nb_flows = 10000,
};

uint8_t nb_pattern_templ = 1;
struct rte_flow_pattern_template *pattern_templates[nb_pattern_templ];
pattern_templates[0] = pattern_template;
uint8_t nb_actions_templ = 1;
struct rte_flow_actions_template *actions_templates[nb_actions_templ];
actions_templates[0] = actions_template;

struct rte_flow_template_table *table =
        rte_flow_template_table_create(port_id, &table_attr,
                                       pattern_templates, nb_pattern_templ,
                                       actions_templates, nb_actions_templ,
                                       &err);

struct rte_flow_op_attr op_attr = {.postpone = 0};

struct rte_flow *flow = rte_flow_async_create(port_id, queue_id, &op_attr,
                                              table, pattern, 0, act, 0,
                                              user_data, &err);
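The asynchronous API additionally requires the port's flow engine to be pre-configured with rte_flow_configure() before any templates are created, and the enqueued operation completes only once it has been flushed to hardware and its result pulled. A minimal sketch of these surrounding steps (the queue size is illustrative):

/* Before creating templates: pre-allocate flow resources with one queue. */
struct rte_flow_port_attr port_attr = {0};
struct rte_flow_queue_attr queue_attr = {.size = 64};
const struct rte_flow_queue_attr *queue_attrs[] = {&queue_attr};
rte_flow_configure(port_id, &port_attr, 1, queue_attrs, &err);

/* After rte_flow_async_create(): flush pending operations to hardware... */
rte_flow_push(port_id, queue_id, &err);

/* ...and poll for the result of the enqueued operation. */
struct rte_flow_op_result result;
int n = rte_flow_pull(port_id, queue_id, &result, 1, &err);
if (n == 1 && result.status == RTE_FLOW_OP_SUCCESS) {
        /* The rule is now active; result.user_data identifies it. */
}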


© Copyright 2024, NVIDIA. Last updated on Jan 9, 2025.