DOCA Documentation v3.2.0

TCP State Tracking

The TCP State feature in the DOCA Target Architecture provides a robust mechanism for tracking TCP flow parameters, specifically sequence and acknowledgment numbers, directly within the data plane of the BlueField device. Monitoring and maintaining stateful information about TCP flows in hardware offloads a task that has traditionally required host CPU processing. With the DPL NvTcpState extern object, developers can programmatically store and query TCP state for each flow at high speed, enabling fine-grained management of network connections on the DPU.

[Figure: tcp_ladder_states.png, TCP connection state ladder diagram]

Tracking TCP sequence and acknowledgment numbers in networking devices, especially within network interface controllers (NICs), empowers a wide range of advanced use cases:

  • Enables in-line enforcement of connection security and integrity by validating packet order, detecting replayed or duplicated segments, and preventing sequence-based attacks.

  • Provides the basis for in-hardware filtering or offloading of application protocols, as flows can be programmatically distinguished by their connection phases (SYN, SYN-ACK, FIN, RST), supporting real-time acceleration, monitoring, and troubleshooting.

  • Facilitates granular stateful firewalling, DDoS mitigation, and anomaly detection by mapping the bi-directional progress of TCP sessions and quickly identifying abnormal or terminated connections at line rate without host intervention.

By integrating TCP state tracking at the NIC level, operators gain a powerful tool for building scalable, high-performance network functions—from load balancers and firewalls to traffic steering and protocol offloading—while conserving host resources and reducing application latency.
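As a rough illustration of the security use case above, the following Python sketch shows the kind of in-window sequence validation that per-flow SEQ/ACK tracking makes possible. It is a model only; the function and variable names are illustrative and not part of any DOCA API.

```python
# Hypothetical sketch of in-window sequence validation, the kind of check
# that per-flow SEQ/ACK tracking enables (not a DOCA API).

def seq_in_window(seq: int, rcv_nxt: int, rcv_wnd: int) -> bool:
    """Return True if `seq` lies in [rcv_nxt, rcv_nxt + rcv_wnd),
    using modulo-2^32 arithmetic to handle sequence-number wraparound."""
    return (seq - rcv_nxt) % (1 << 32) < rcv_wnd

# An in-window segment is accepted; an old (replayed) one is rejected:
assert seq_in_window(seq=1000, rcv_nxt=1000, rcv_wnd=65535)
assert not seq_in_window(seq=999, rcv_nxt=1000, rcv_wnd=65535)
# Wraparound near 2^32 is handled by the modulo arithmetic:
assert seq_in_window(seq=5, rcv_nxt=(1 << 32) - 10, rcv_wnd=65535)
```

A dropped or diverted verdict for out-of-window segments is exactly the line-rate replay defense described above.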

This pipeline consists of a single control with the following control-level objects:

  • Three direct counters: for connection tracking, RX/TX forwarding, and ARP table accesses.

  • A hardware-accelerated TCP state tracker (NvTcpState) for storing and updating TCP connection state information.


control c(
    inout nv_headers_t headers,
    in nv_standard_metadata_t std_meta,
    inout usermeta_conntrack_t user_meta,
    inout nv_empty_metadata_t pkt_out_meta
) {
    NvDirectCounter(NvCounterType.PACKETS_AND_BYTES) conntrack_counter;
    NvDirectCounter(NvCounterType.PACKETS_AND_BYTES) rx_tx_table_counter;
    NvDirectCounter(NvCounterType.PACKETS_AND_BYTES) arp_table_counter;
    NvTcpState(TABLE_SIZE) ct_tcp_state;
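As a rough model of what a PACKETS_AND_BYTES direct counter accumulates on each table hit, the Python sketch below tallies one packet and its byte length per count() call. The class name is illustrative, not a DOCA API.

```python
# Toy model of an NvDirectCounter(PACKETS_AND_BYTES): each count() call
# on a table hit adds one packet and that packet's byte length.
class DirectCounterModel:
    def __init__(self):
        self.packets = 0
        self.bytes = 0

    def count(self, pkt_len: int):
        self.packets += 1
        self.bytes += pkt_len

c = DirectCounterModel()
c.count(64)     # a minimum-size frame
c.count(1500)   # a full-size frame
assert (c.packets, c.bytes) == (2, 1564)
```

In the DPL code the counter is bound to a table via direct_counter, so each matched entry maintains its own packet/byte pair.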

The ARP table:

  • Tracks ARP requests and responses, matching packets on EtherType and ingress port and forwarding them to the opposite side (uplink or VF) as appropriate.

  • Includes per-entry counters and default no-op handling for unmatched ARP messages.


action arp_forward(nv_logical_port_t port) {
    arp_table_counter.count();
    nv_send_to_port(port);
}

table arp_table {
    key = {
        std_meta.last_l2_ether_type : exact;
        std_meta.ingress_port : exact;
    }
    actions = {
        arp_forward;
        NoAction;
    }
    size = TABLE_SIZE;
    direct_counter = arp_table_counter;
    default_action = NoAction();
    const entries = {
        (NV_TYPE_ARP, UPLINK_DPL_LOGICAL_PORT_ID): arp_forward(VF_DPL_LOGICAL_PORT_ID);
        (NV_TYPE_ARP, VF_DPL_LOGICAL_PORT_ID): arp_forward(UPLINK_DPL_LOGICAL_PORT_ID);
    }
}
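The table's bidirectional forwarding can be modeled as an exact-match lookup on (EtherType, ingress port). The Python sketch below is illustrative only: the port IDs 0/1 mirror the example's UPLINK/VF defines, and NV_TYPE_ARP is assumed to be the standard ARP EtherType 0x0806.

```python
# Minimal model of arp_table: an exact match on (EtherType, ingress port)
# forwards ARP frames to the opposite side; a miss means NoAction.
NV_TYPE_ARP = 0x0806           # standard ARP EtherType (assumed value)
UPLINK_PORT, VF_PORT = 0, 1    # UPLINK/VF_DPL_LOGICAL_PORT_ID in the example

ARP_TABLE = {
    (NV_TYPE_ARP, UPLINK_PORT): VF_PORT,
    (NV_TYPE_ARP, VF_PORT): UPLINK_PORT,
}

def arp_lookup(ether_type: int, ingress_port: int):
    # Miss falls through to None, mirroring default_action = NoAction().
    return ARP_TABLE.get((ether_type, ingress_port))

assert arp_lookup(0x0806, UPLINK_PORT) == VF_PORT
assert arp_lookup(0x0800, UPLINK_PORT) is None  # non-ARP: table miss
```

On a hit the real action also increments the per-entry direct counter before sending to the port.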

If a packet carries the FIN or RST flag, or the SYN-ACK combination, it is sent to a controller, which inserts or removes flow entries. The controller then reinjects these handshake packets into the pipeline, where they are forwarded using an RX/TX table. This table:

  • Defines actions to forward (rx_tx_forward) packets to a given port with accounting, or to drop (rx_tx_drop) packets, with each action incrementing its respective counter.

  • Uses the packet's IPv4 source address to determine its forwarding. Its entries map well-known local/remote IPs to hardware port IDs, forwarding accordingly or dropping unmatched packets.

  • Drops unrecognized traffic by default.


action rx_tx_forward(nv_logical_port_t port) {
    rx_tx_table_counter.count();
    nv_send_to_port(port);
}

action rx_tx_drop() {
    rx_tx_table_counter.count();
    nv_drop();
}

table rx_tx_table {
    key = {
        headers.ipv4.src_addr : exact;
    }
    actions = {
        rx_tx_forward;
        rx_tx_drop;
        NoAction;
    }
    size = TABLE_SIZE;
    direct_counter = rx_tx_table_counter;
    default_action = rx_tx_drop();
    const entries = {
        (LOCAL_IP_ADDRESS): rx_tx_forward(UPLINK_DPL_LOGICAL_PORT_ID);
        (REMOTE_IP_ADDRESS): rx_tx_forward(VF_DPL_LOGICAL_PORT_ID);
    }
}
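The handshake classification described above (FIN, RST, or SYN-ACK packets go to the controller, everything else continues to the conntrack table) can be sketched in Python. The bit indices FIN=0, SYN=1, RST=2, ACK=4 are taken from the example's defines; the function name is illustrative.

```python
# Sketch of the handshake/teardown classification using the flag bit
# indices from the example (FIN=0, SYN=1, RST=2, ACK=4).
TCP_FIN, TCP_SYN, TCP_RST, TCP_ACK = 1 << 0, 1 << 1, 1 << 2, 1 << 4

def goes_to_controller(flags: int) -> bool:
    # FIN or RST always diverts to the controller.
    if flags & (TCP_FIN | TCP_RST):
        return True
    # SYN only diverts when combined with ACK (i.e. a SYN-ACK).
    return (flags & TCP_SYN != 0) and (flags & TCP_ACK != 0)

assert goes_to_controller(TCP_SYN | TCP_ACK)   # SYN-ACK: to controller
assert goes_to_controller(TCP_FIN | TCP_ACK)   # FIN: to controller
assert not goes_to_controller(TCP_SYN)         # bare SYN: conntrack path
assert not goes_to_controller(TCP_ACK)         # data packet: conntrack path
```

Note that, as in the DPL apply block, a bare SYN does not divert; only the SYN-ACK of the handshake does.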

The remaining TCP session packets are processed by the main match/action block, which consists of a conntrack table with:

  • An action (track_tcp_conn) that increments a connection tracking counter and updates the TCP state tracker with flow index and direction.

  • A stateful table (conntrack_table) that matches on the IPv4 5-tuple and protocol for each TCP flow and applies either track_tcp_conn or no action.

  • A per-entry direct counter for packet and byte statistics on each tracked connection.

The NvTcpState store method is called for every packet of an ongoing TCP session, recording the current SEQ/ACK numbers in stateful memory. The TCP state can be retrieved by the controller application via APIs; it is not available to be read by the data plane.


action track_tcp_conn(bit<32> index, NvTcpFlowDirection direction) {
    conntrack_counter.count();
    user_meta.conntrackOut = ct_tcp_state.store(index, direction);
}

table conntrack_table {
    key = {
        headers.ipv4.src_addr : exact;
        headers.ipv4.dst_addr : exact;
        headers.ipv4.protocol : exact;
        headers.tcp.src_port : exact;
        headers.tcp.dst_port : exact;
    }
    actions = {
        track_tcp_conn;
        NoAction;
    }
    size = TABLE_SIZE;
    direct_counter = conntrack_counter;
    default_action = NoAction();
}
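The contract around NvTcpState, where the data plane only stores state and the controller reads it back out-of-band, can be modeled roughly as follows. The class and method names are illustrative, not the DOCA API, and the real store() extracts SEQ/ACK from the packet in hardware rather than taking them as arguments.

```python
# Rough model of the NvTcpState contract: store() is the only data-plane
# operation; reads happen out-of-band from the controller side.
class TcpStateModel:
    def __init__(self, table_size: int):
        self._slots = [None] * table_size  # one slot per flow index

    def store(self, index: int, direction: str, seq: int, ack: int) -> int:
        # Data-plane path: record current SEQ/ACK for this flow+direction.
        # (In hardware, SEQ/ACK come from the packet itself.)
        self._slots[index] = (direction, seq, ack)
        return 0  # status value written into user metadata (conntrackOut)

    def controller_query(self, index: int):
        # Control-plane path: the only way state is read back.
        return self._slots[index]

state = TcpStateModel(table_size=128)
state.store(index=7, direction="ORIGINATOR", seq=1000, ack=2000)
assert state.controller_query(7) == ("ORIGINATOR", 1000, 2000)
```

The direction labels here are placeholders; the actual NvTcpFlowDirection enumerators are defined by the DPL includes.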

Finally, the TCP session data packets are sent to the rx_tx_table to reach the desired destination.

See below for the complete DPL example.


/*
 * SPDX-FileCopyrightText: Copyright (c) 2025 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
 * SPDX-License-Identifier: LicenseRef-NvidiaProprietary
 *
 * NVIDIA CORPORATION, its affiliates and licensors retain all intellectual
 * property and proprietary rights in and to this material, related
 * documentation and any modifications thereto. Any use, reproduction,
 * disclosure or distribution of this material and related documentation
 * without an express license agreement from NVIDIA CORPORATION or
 * its affiliates is strictly prohibited.
 */

#include <doca_model.p4>
#include <doca_headers.p4>
#include <doca_externs.p4>
#include <doca_parser.p4>

// The port number of VF/SF that is managed by the controller DPDK application.
// The controller adds and removes entries from the conntrack table.
#define CONTROLLER_DPL_LOGICAL_PORT_ID 2

// The port number of the uplink port.
#define UPLINK_DPL_LOGICAL_PORT_ID 0

// The port number of the VF port.
#define VF_DPL_LOGICAL_PORT_ID 1

#define LOCAL_IP_ADDRESS 32w0x63000008  // 99.0.0.8
#define REMOTE_IP_ADDRESS 32w0x63000016 // 99.0.0.22

#define TCP_FLAG_FIN_BIT_INDEX 0
#define TCP_FLAG_SYN_BIT_INDEX 1
#define TCP_FLAG_RST_BIT_INDEX 2
#define TCP_FLAG_PUSH_BIT_INDEX 3
#define TCP_FLAG_ACK_BIT_INDEX 4

struct usermeta_conntrack_t {
    bit<8> conntrackOut;
};

#define TABLE_SIZE 128

// Pipeline section
control c(
    inout nv_headers_t headers,
    in nv_standard_metadata_t std_meta,
    inout usermeta_conntrack_t user_meta,
    inout nv_empty_metadata_t pkt_out_meta
) {
    NvDirectCounter(NvCounterType.PACKETS_AND_BYTES) conntrack_counter;
    NvDirectCounter(NvCounterType.PACKETS_AND_BYTES) rx_tx_table_counter;
    NvDirectCounter(NvCounterType.PACKETS_AND_BYTES) arp_table_counter;
    NvTcpState(TABLE_SIZE) ct_tcp_state;

    /********************************************************************************************
     * Conntrack table
     */
    action track_tcp_conn(bit<32> index, NvTcpFlowDirection direction) {
        conntrack_counter.count();
        user_meta.conntrackOut = ct_tcp_state.store(index, direction);
    }

    table conntrack_table {
        key = {
            headers.ipv4.src_addr : exact;
            headers.ipv4.dst_addr : exact;
            headers.ipv4.protocol : exact;
            headers.tcp.src_port : exact;
            headers.tcp.dst_port : exact;
        }
        actions = {
            track_tcp_conn;
            NoAction;
        }
        size = TABLE_SIZE;
        direct_counter = conntrack_counter;
        default_action = NoAction();
    }

    /********************************************************************************************
     * RX/TX table
     */
    action rx_tx_forward(nv_logical_port_t port) {
        rx_tx_table_counter.count();
        nv_send_to_port(port);
    }

    action rx_tx_drop() {
        rx_tx_table_counter.count();
        nv_drop();
    }

    table rx_tx_table {
        key = {
            headers.ipv4.src_addr : exact;
        }
        actions = {
            rx_tx_forward;
            rx_tx_drop;
            NoAction;
        }
        size = TABLE_SIZE;
        direct_counter = rx_tx_table_counter;
        default_action = rx_tx_drop();
        const entries = {
            (LOCAL_IP_ADDRESS): rx_tx_forward(UPLINK_DPL_LOGICAL_PORT_ID);
            (REMOTE_IP_ADDRESS): rx_tx_forward(VF_DPL_LOGICAL_PORT_ID);
        }
    }

    /********************************************************************************************
     * ARP table
     */
    action arp_forward(nv_logical_port_t port) {
        arp_table_counter.count();
        nv_send_to_port(port);
    }

    table arp_table {
        key = {
            std_meta.last_l2_ether_type : exact;
            std_meta.ingress_port : exact;
        }
        actions = {
            arp_forward;
            NoAction;
        }
        size = TABLE_SIZE;
        direct_counter = arp_table_counter;
        default_action = NoAction();
        const entries = {
            (NV_TYPE_ARP, UPLINK_DPL_LOGICAL_PORT_ID): arp_forward(VF_DPL_LOGICAL_PORT_ID);
            (NV_TYPE_ARP, VF_DPL_LOGICAL_PORT_ID): arp_forward(UPLINK_DPL_LOGICAL_PORT_ID);
        }
    }

    /********************************************************************************************
     * Control plane
     */
    apply {
        user_meta.conntrackOut = 8w0;

        if (std_meta.ingress_port == CONTROLLER_DPL_LOGICAL_PORT_ID) {
            // Packets transmitted from the controller: Send the packet to the desired destination.
            rx_tx_table.apply(); // execution ends here.
        } else {
            // Normal packets from network or VF.

            // Handle ARP packets.
            arp_table.apply(); // on hit execution ends here.

            // Handle valid TCP packets.
            if (headers.ipv4.isValid() && headers.tcp.isValid()) {
                // Check if the packet has a FIN, RST or SYN_ACK TCP flag.
                if ((headers.tcp.flags[TCP_FLAG_FIN_BIT_INDEX:TCP_FLAG_FIN_BIT_INDEX] == 1) ||
                    (headers.tcp.flags[TCP_FLAG_RST_BIT_INDEX:TCP_FLAG_RST_BIT_INDEX] == 1) ||
                    (headers.tcp.flags[TCP_FLAG_SYN_BIT_INDEX:TCP_FLAG_SYN_BIT_INDEX] == 1 &&
                     headers.tcp.flags[TCP_FLAG_ACK_BIT_INDEX:TCP_FLAG_ACK_BIT_INDEX] == 1)) {
                    // Send the packet to the controller to add or remove entries.
                    // Note: The controller is expected to send back the packet once the entries are updated.
                    nv_send_to_port(CONTROLLER_DPL_LOGICAL_PORT_ID); // execution ends here.
                } else {
                    conntrack_table.apply(); // execution does NOT end here.
                }
            }

            // Send the packet to the desired destination.
            rx_tx_table.apply(); // execution ends here.
        }
    }
}

// Instantiate the top-level DPU pipeline
package NvDocaPipeline(
    nv_fixed_parser(),
    c()
) main;

© Copyright 2025, NVIDIA. Last updated on Nov 20, 2025