Task#

Concurrent task scheduling and execution system for real-time, multi-threaded applications.

Overview#

The Task library provides a high-performance framework for concurrent task execution with dependency management, worker thread pools, and flexible scheduling. It simplifies complex real-time threading code with a worker queue architecture that is both scalable and deterministic.

Key Features#

  • Task Graphs: Define complex task dependencies with automatic execution ordering

  • Worker Thread Pools: Configurable worker threads with CPU core pinning and priority scheduling

  • Task Categories: Organize tasks by category and assign workers to specific categories

  • Periodic Triggers: High-precision periodic task execution with nanosecond timing precision

  • Memory Triggers: Monitor memory locations and execute callbacks when values change

  • Deterministic Execution: Lock-free and allocation-free critical paths for predictable real-time performance

  • Task Timeouts: Configurable execution time limits to guarantee real-time operation

  • Cancellation Support: Cooperative task cancellation with CancellationToken

  • Performance Monitoring: Detailed execution statistics and Chrome trace output

Quick Start#

Creating and Executing a Task#

// Create a simple task that executes a function
std::atomic<int> counter{0};

auto task = TaskBuilder("simple_task")
                    .function([&counter]() -> TaskResult {
                        counter++;
                        return TaskResult{TaskStatus::Completed};
                    })
                    .build_shared();

// Execute the task
const auto result = task->execute();

Tasks with Timeout#

auto task = TaskBuilder("timed_task")
                    .function([]() -> TaskResult {
                        std::this_thread::sleep_for(50ms);
                        return TaskResult{TaskStatus::Completed};
                    })
                    .timeout(100ms)
                    .build_shared();

Tasks with Cancellation#

// Define a cancellable work function
auto cancellable_work = [](const TaskContext &ctx) -> TaskResult {
    for (int i = 0; i < 10; i++) {
        if (ctx.cancellation_token->is_cancelled()) {
            return TaskResult{TaskStatus::Cancelled, "Cancelled by user"};
        }
        std::this_thread::sleep_for(10ms);
    }
    return TaskResult{TaskStatus::Completed};
};

auto task = TaskBuilder("cancellable_task").function(cancellable_work).build_shared();

Core Concepts#

Task Basics#

A Task is the fundamental unit of work in the task system. Each task encapsulates:

  • A function to execute (lambda or callable)

  • Optional timeout duration

  • Cancellation token for cooperative cancellation

  • Task metadata (name, category, status)

  • Dependency relationships with other tasks

Tasks are created using the TaskBuilder fluent interface and can be executed directly or scheduled through a TaskScheduler.

Task Graphs#

TaskGraph provides a fluent API for building complex task dependency graphs. There are two approaches for creating task graphs:

Single-Task Graphs#

For simple workflows with one task:

std::atomic<int> counter{0};

// Create a task graph with a single task
auto graph = TaskGraph::create("simple_graph")
                     .single_task("increment")
                     .function([&counter]() { counter++; })
                     .build();

Multi-Task Graphs with Dependencies#

For complex workflows with multiple tasks and dependencies:

std::atomic<int> step{0};

TaskGraph graph("dependency_graph");

// Create tasks with dependencies
auto grandparent =
        graph.register_task("grandparent").function([&step]() { step.store(1); }).add();

auto parent = graph.register_task("parent")
                      .depends_on(grandparent)
                      .function([&step]() { step.store(2); })
                      .add();

graph.register_task("child").depends_on(parent).function([&step]() { step.store(3); }).add();

// Build the graph to finalize dependencies
graph.build();

Dependencies are expressed by name, and TaskGraph automatically determines execution order based on dependency chains. Tasks with no unmet dependencies execute immediately, while dependent tasks wait for their parents to complete.

Task Scheduler#

TaskScheduler manages worker threads that execute tasks from one or more task graphs. The scheduler supports:

  • Multiple worker threads with configurable counts

  • CPU core pinning for deterministic performance

  • Real-time thread priorities

  • Category-based task routing

  • Task readiness checking with configurable tolerance to minimize scheduling jitter

Basic Scheduler Usage#

// Create a scheduler with 2 worker threads
auto scheduler = TaskScheduler::create().workers(2).build();

std::atomic<int> counter{0};

auto graph = TaskGraph::create("scheduled_graph")
                     .single_task("work")
                     .function([&counter]() { counter++; })
                     .build();

// Schedule the graph for execution
scheduler.schedule(graph);

// Wait for workers to complete
scheduler.join_workers();

The scheduler uses a builder pattern for configuration, making it easy to customize behavior before construction.

Worker Configuration#

Workers are the threads that execute tasks. Each worker can be configured with:

  • CPU core affinity (pinning to specific cores)

  • Thread priority (1-99, higher = more urgent)

  • Task categories (which types of tasks this worker handles)

Scheduler with Task Categories#

// Configure workers for specific task categories
std::vector<WorkerConfig> configs;

// Worker 0: High priority tasks
configs.push_back(
        WorkerConfig::create_for_categories({TaskCategory{BuiltinTaskCategory::HighPriority}}));

// Worker 1: Default tasks
configs.push_back(
        WorkerConfig::create_for_categories({TaskCategory{BuiltinTaskCategory::Default}}));

auto scheduler = TaskScheduler::create().workers(WorkersConfig{configs}).build();

Worker with Core Affinity#

const auto num_cores = std::thread::hardware_concurrency();
if (num_cores >= 4) {
    // Pin worker to specific CPU core
    std::vector<WorkerConfig> configs;
    configs.push_back(WorkerConfig::create_pinned(
            2, // Core ID
            {TaskCategory{BuiltinTaskCategory::Default}}));

    auto scheduler = TaskScheduler::create().workers(WorkersConfig{configs}).build();
}

Task Categories#

TaskCategory provides an extensible categorization system for organizing tasks. The framework includes built-in categories (Default, HighPriority, LowPriority, IO, Compute, Network, Message), and users can define custom categories.

Custom categories are defined using the DECLARE_TASK_CATEGORIES macro at namespace scope:

DECLARE_TASK_CATEGORIES(MyAppCategories, DataProcessing, NetworkIO, Rendering);

Once defined, custom categories can be used with tasks and task graphs:

std::atomic<int> counter{0};

// Use custom task category with TaskGraph
auto graph = TaskGraph::create("categorized_graph")
                     .single_task("process_data")
                     .category(TaskCategory{MyAppCategories::DataProcessing})
                     .function([&counter]() { counter++; })
                     .build();

Categories enable workload partitioning across worker threads, allowing fine-grained control over task execution resources.

Task Pool#

TaskPool provides efficient task object reuse through a lock-free pooling mechanism. Instead of allocating new tasks for every execution, the pool maintains a cache of reusable task objects.

// Create a task pool with initial capacity
auto pool = TaskPool::create(
        100, // Initial pool size
        8,   // Max task parents
        64,  // Max task name length
        32); // Max graph name length

// Acquire tasks from pool
auto task1 = pool->acquire_task("task1", "graph1");
auto task2 = pool->acquire_task("task2", "graph1");

// Check pool statistics
const auto stats = pool->get_stats();

TaskPool is automatically used by TaskGraph for efficient task reuse across multiple scheduling rounds. The pool tracks statistics including hit rate and reuse counts.

Task Monitor#

TaskMonitor tracks task execution with detailed performance statistics and visualization capabilities. The monitor operates in a separate background thread and uses lock-free queues for real-time safe communication with worker threads.

Key capabilities include:

  • Performance Statistics: Tracks execution duration, scheduling jitter, and task status

  • Timeout Detection: Automatically detects and can cancel tasks that exceed configured timeouts

  • Chrome Trace Output: Exports execution timelines for visualization in chrome://tracing

  • Graph-Level Analysis: Provides aggregated statistics grouped by task graph and scheduling round

The TaskMonitor is automatically integrated with TaskScheduler. The monitor thread can be pinned to a specific CPU core so its overhead stays deterministic:

// Configure scheduler with monitor on dedicated core
auto scheduler = TaskScheduler::create().workers(4).monitor_core(0).build();

std::atomic<int> completed_count{0};
std::atomic<int> timeout_count{0};

TaskGraph graph("monitored_graph");

// Normal task
graph.register_task("fast_task").function([&completed_count]() { completed_count++; }).add();

// Task with timeout that will exceed and be cancelled by the task monitor
graph.register_task("timeout_task")
        .timeout(10ms)
        .function([&timeout_count](const TaskContext &ctx) -> TaskResult {
            std::this_thread::sleep_for(50ms);
            if (ctx.cancellation_token->is_cancelled()) {
                return TaskResult{TaskStatus::Cancelled, "Task timed out"};
            }
            timeout_count++;
            return TaskResult{TaskStatus::Completed};
        })
        .add();

graph.build();

// Execute tasks
scheduler.schedule(graph);
scheduler.join_workers();

// Print execution statistics
scheduler.print_monitor_stats();

// Export Chrome trace for visualization
const auto result = scheduler.write_chrome_trace_to_file("trace.json");

Triggers#

The task library provides two types of triggers for periodic and event-driven task execution.

Timed Trigger#

TimedTrigger executes a callback at regular intervals with nanosecond precision. It supports:

  • High-precision periodic execution

  • CPU core pinning for deterministic timing

  • Real-time thread priorities

  • Jump detection for timing anomalies

  • Detailed latency statistics

std::atomic<int> tick_count{0};

// Create a periodic trigger
auto trigger = TimedTrigger::create(
                       [&tick_count]() { tick_count++; },
                       10ms) // Trigger every 10ms
                       .max_triggers(5)
                       .build();

// Start the trigger
if (const auto err = trigger.start(); !err) {
    trigger.wait_for_completion();
}

TimedTrigger is commonly used with TaskScheduler to periodically schedule task graphs for execution, as demonstrated in the sample application.

Memory Trigger#

MemoryTrigger monitors a memory location and executes a callback when values change. It supports:

  • Atomic memory location monitoring

  • Custom comparators for trigger conditions

  • Condition variable or polling notification strategies

  • CPU core pinning and priority scheduling

auto memory = std::make_shared<std::atomic<int>>(0);
std::atomic<int> trigger_count{0};
std::atomic<int> value_delta{0};

// Monitor memory location for changes
auto trigger = MemoryTrigger<int>::create(
                       memory,
                       [&trigger_count, &value_delta](int old_val, int new_val) {
                           trigger_count++;
                           value_delta.store(new_val - old_val);
                       })
                       .with_notification_strategy(NotificationStrategy::Polling)
                       .with_polling_interval(1ms)
                       .build();

if (const auto err = trigger.start(); !err) {
    // Change the memory value
    memory->store(42);
    std::this_thread::sleep_for(10ms);
    trigger.stop();
}

MemoryTrigger is useful for event-driven architectures where tasks respond to state changes in shared memory.

Complete Example#

The task sample application demonstrates a complete workflow:

using namespace framework::task;
using namespace std::chrono_literals;

setup_logging();

const auto config = parse_arguments(argc, argv);
if (!config.has_value()) {
    // Empty error string means --help or --version was shown (success)
    // Non-empty error string means parse error (failure)
    if (!config.error().empty()) {
        RT_LOG_ERROR("{}", config.error());
        return EXIT_FAILURE;
    }
    return EXIT_SUCCESS;
}

// Create task scheduler with configured worker threads
auto scheduler = TaskScheduler::create().workers(config->workers).build();

// Define a sample task that increments a counter
std::atomic<int> task_counter{0};
auto graph = TaskGraph::create("demo_graph")
                     .single_task("sample_task")
                     .function([&task_counter]() {
                         task_counter++;
                         std::this_thread::sleep_for(100us);
                     })
                     .build();

// Schedule task at regular intervals
auto trigger_function = [&scheduler, &graph]() { scheduler.schedule(graph); };
const auto interval = std::chrono::milliseconds{config->interval_ms};
auto trigger = TimedTrigger::create(trigger_function, interval)
                       .max_triggers(config->count)
                       .build();

// Run and wait for completion
if (const auto start_result = trigger.start(); start_result) {
    RT_LOG_ERROR("Failed to start trigger: {}", start_result.message());
    return EXIT_FAILURE;
}
trigger.wait_for_completion();

// Cleanup and report
scheduler.join_workers();
RT_LOG_INFO("Complete: {} tasks executed", task_counter.load());
trigger.print_summary();
scheduler.print_monitor_stats();

return EXIT_SUCCESS;

This example creates a TaskScheduler, defines a simple task graph, and uses TimedTrigger to periodically schedule the graph for execution at configurable intervals.

Additional Examples#

For more examples, see:

  • framework/task/samples/task_sample.cpp - Complete sample application with scheduler, triggers, and task graphs

  • framework/task/tests/task_sample_tests.cpp - Documentation examples and unit tests

API Reference#

enum class framework::task::GrowthStrategy#

Growth strategy when FlatMap reaches capacity threshold

Values:

enumerator Allocate#

Allow underlying container to grow (current default behavior)

enumerator Evict#

Evict entries when threshold reached (prevents growth)

enumerator Throw#

Throw exception when threshold would be exceeded.

enum class framework::task::NotificationStrategy#

Notification strategy for memory monitoring.

Values:

enumerator ConditionVariable#

Use condition variable (requires explicit notify)

enumerator Polling#

Polling (when explicit notification not possible)

enum class framework::task::TaskStatus#

Task execution status.

Values:

enumerator NotStarted#

Task has not been started yet.

enumerator Running#

Task is currently executing.

enumerator Completed#

Task completed successfully.

enumerator Failed#

Task failed during execution.

enumerator Cancelled#

Task was cancelled before or during execution.

enum class framework::task::TaskErrc : std::uint8_t#

Task framework error codes compatible with std::error_code

This enum class provides type-safe error codes for task framework operations that integrate seamlessly with the standard C++ error handling framework.

Values:

enumerator Success#

Operation succeeded.

enumerator AlreadyRunning#

Operation failed: already running.

enumerator NotStarted#

Operation failed: not started.

enumerator QueueFull#

Operation failed: queue is full.

enumerator InvalidParameter#

Invalid parameter provided.

enumerator TaskNotFound#

Task not found in registry.

enumerator ThreadConfigFailed#

Thread configuration failed.

enumerator ThreadPinFailed#

Thread core pinning failed.

enumerator ThreadPriorityFailed#

Thread priority setting failed.

enumerator FileOpenFailed#

File open operation failed.

enumerator FileWriteFailed#

File write operation failed.

enumerator PthreadGetaffinityFailed#

pthread_getaffinity_np failed

enumerator PthreadSetaffinityFailed#

pthread_setaffinity_np failed

enumerator PthreadGetschedparamFailed#

pthread_getschedparam failed

enumerator PthreadSetschedparamFailed#

pthread_setschedparam failed

enum class framework::task::MonitorEventType#

Task monitor event types for tracking task lifecycle.

Values:

enumerator RegisterTask#

Task registration event.

enumerator RecordStart#

Task execution start.

enumerator RecordEnd#

Task execution end.

enumerator CancelTask#

Task cancellation request.

enum class framework::task::WorkerShutdownBehavior#

Shutdown behavior when joining workers

Values:

enumerator FinishPendingTasks#

Complete all pending tasks before shutdown (graceful)

enumerator CancelPendingTasks#

Cancel all pending tasks and shutdown immediately (forced)

enum class framework::task::WorkerStartupBehavior#

Worker startup behavior for constructor.

Values:

enumerator AutoStart#

Automatically start workers during construction.

enumerator Manual#

Require explicit start_workers() call.

enum class framework::task::TraceWriteMode#

File write mode for trace output functions.

Values:

enumerator Overwrite#

Overwrite existing file (default)

enumerator Append#

Append to existing file.

enum class framework::task::SkipStrategy#

Skip strategy for handling missed trigger windows.

Values:

enumerator CatchupAll#

Catch up all missed triggers (default)

enumerator SkipAhead#

Skip missed triggers, jump to next future interval.

enum class framework::task::TriggerEventType#

Trigger event types for statistics tracking.

Values:

enumerator TriggerStart#

Trigger tick started.

enumerator CallbackStart#

Callback execution started.

enumerator CallbackEnd#

Callback execution completed.

enumerator LatencyWarning#

Latency exceeded threshold.

enumerator CallbackDurationWarning#

Callback duration exceeded threshold.

enumerator JumpDetected#

Sudden timing jump detected.

enumerator TriggersSkipped#

Multiple triggers were skipped.

using framework::task::TaskFunction = std::variant<std::function<TaskResult()>, std::function<TaskResult(const TaskContext&)>>#

Type alias for task function variants to reduce verbosity.

typedef std::uint32_t framework::task::WorkerId#

Worker thread identifier type.

using framework::task::Nanos = std::chrono::nanoseconds#

Time type for nanosecond precision timing.

static constexpr std::size_t framework::task::DEFAULT_POOL_SIZE = 64#

Default initial pool size.

static constexpr std::size_t framework::task::DEFAULT_MAX_TASK_PARENTS = 8#

Default maximum expected parents per task.

static constexpr std::size_t framework::task::DEFAULT_MAX_TASK_NAME_LENGTH = 64#

Default maximum expected task name length.

static constexpr std::size_t framework::task::DEFAULT_MAX_GRAPH_NAME_LENGTH = 32#

Default maximum expected graph name length.

template<MemoryTriggerRequirements T, MemoryTriggerCallback<T> CallbackType>
auto framework::task::make_memory_trigger(
std::shared_ptr<std::atomic<T>> memory_ptr,
CallbackType &&callback,
)#

Create MemoryTrigger with automatic type deduction

Parameters:
  • memory_ptr[in] Shared pointer to atomic memory location

  • callback[in] Function to call when trigger condition is met

Returns:

Builder for configuring the MemoryTrigger

framework::task::DECLARE_TASK_CATEGORIES(
BuiltinTaskCategory,
Default,
HighPriority,
LowPriority,
IO,
Compute,
Network,
Message,
)#

Built-in task categories for common use cases

inline const TaskErrorCategory &framework::task::task_category(
) noexcept#

Get the singleton instance of the task error category

Returns:

Reference to the task error category

inline std::error_code framework::task::make_error_code(
const TaskErrc errc,
) noexcept#

Create an error_code from a TaskErrc value

Parameters:

errc[in] The task error code

Returns:

A std::error_code representing the task error

constexpr bool framework::task::is_success(
const TaskErrc errc,
) noexcept#

Check if a TaskErrc represents success

Parameters:

errc[in] The error code to check

Returns:

true if the error code represents success, false otherwise

inline bool framework::task::is_task_success(
const std::error_code &errc,
) noexcept#

Check if an error_code represents task success

An error code represents success if it evaluates to false (i.e., value is 0). This works regardless of the category (system, task, etc.).

Parameters:

errc[in] The error code to check

Returns:

true if the error code represents success, false otherwise

inline const char *framework::task::get_error_name(
const TaskErrc errc,
) noexcept#

Get the name of a TaskErrc enum value

Parameters:

errc[in] The error code

Returns:

The enum name as a string

inline const char *framework::task::get_error_name(
const std::error_code &ec,
) noexcept#

Get the name of a TaskErrc from a std::error_code

Parameters:

ec[in] The error code

Returns:

The enum name as a string, or “unknown” if not a task error

framework::task::DECLARE_LOG_COMPONENT(
TaskLog,
Task,
FlatMap,
TaskNvtx,
TaskViz,
TaskMonitor,
TaskGraph,
TaskScheduler,
TaskPool,
TaskTrigger,
)#

Declare logging components for task subsystem.

std::set<std::uint32_t> framework::task::parse_core_list(
const std::string_view param_value,
)#

Parse core list from a kernel parameter value. Supports ranges (4-64) and individual cores (1,2,3).

Parameters:

param_value[in] Parameter value string

Returns:

Set of core IDs

bool framework::task::validate_rt_core_config(
const std::string_view cmdline,
const std::vector<std::uint32_t> &cores,
)#

Validate that cores are properly configured for RT workloads. Checks if cores are in the isolcpus, nohz_full, and rcu_nocbs lists.

Parameters:
  • cmdline[in] Kernel command line string

  • cores[in] Vector of core IDs to validate

Returns:

true if all cores are properly configured for RT, false otherwise

double framework::task::calculate_standard_deviation(
const std::vector<std::int64_t> &values,
double mean,
)#

Calculate standard deviation for a collection of int64_t values

Parameters:
  • values[in] Vector of int64_t values to analyze

  • mean[in] Pre-calculated mean of the values

Returns:

Standard deviation

double framework::task::calculate_standard_deviation(
const std::vector<double> &values,
double mean,
)#

Calculate standard deviation for a collection of double values

Parameters:
  • values[in] Vector of double values to analyze

  • mean[in] Pre-calculated mean of the values

Returns:

Standard deviation

double framework::task::calculate_percentile(
const std::vector<double> &sorted_values,
double percentile,
)#

Calculate percentile from sorted vector

Parameters:
  • sorted_values[in] Sorted values vector

  • percentile[in] Percentile to calculate (0.0 to 1.0)

Returns:

Percentile value

double framework::task::nanos_to_micros_int64(
std::int64_t nanos_count,
)#

Convert nanoseconds to microseconds (int64_t version)

Parameters:

nanos_count[in] Nanoseconds value as int64_t

Returns:

Microseconds as double

double framework::task::nanos_to_micros_double(double nanos_count)#

Convert nanoseconds to microseconds (double version)

Parameters:

nanos_count[in] Nanoseconds value as double

Returns:

Microseconds as double

TimingStatistics framework::task::calculate_timing_statistics(
const std::vector<std::int64_t> &values_ns,
)#

Calculate comprehensive timing statistics from int64_t nanosecond values

Parameters:

values_ns[in] Vector of timing values in nanoseconds (int64_t)

Returns:

TimingStatistics structure with all calculated metrics in microseconds

TimingStatistics framework::task::calculate_timing_statistics(
const std::vector<double> &values_ns,
)#

Calculate comprehensive timing statistics from double nanosecond values

Parameters:

values_ns[in] Vector of timing values in nanoseconds (double)

Returns:

TimingStatistics structure with all calculated metrics in microseconds

template<typename RecordType>
constexpr std::size_t framework::task::calculate_max_records_for_bytes(
const std::size_t max_bytes,
) noexcept#

Calculate maximum number of records for a given memory budget

Template Parameters:

RecordType – The type of record to calculate for

Parameters:

max_bytes[in] Maximum bytes to allocate for records

Returns:

Maximum number of records that fit in the given byte limit

std::chrono::nanoseconds framework::task::calculate_tai_offset()#

Calculate the current TAI offset. Retrieves the TAI (International Atomic Time) offset from the system clock using adjtimex().

Returns:

TAI offset as std::chrono::nanoseconds, or 0 if retrieval fails

std::uint64_t framework::task::calculate_start_time_for_next_period(
const StartTimeParams &params,
std::chrono::nanoseconds tai_offset,
)#

Calculate the start time for the next period boundary. Computes the next aligned time boundary based on GPS parameters and the period.

Parameters:
  • params[in] Parameters for start time calculation

  • tai_offset[in] TAI offset as std::chrono::nanoseconds

Returns:

Next aligned start time in nanoseconds since epoch

std::uint64_t framework::task::calculate_start_time_for_next_period(
const StartTimeParams &params,
)#

Calculate the start time for the next period boundary using the current system TAI offset. Convenience wrapper that retrieves the current TAI offset and calculates the start time.

Parameters:

params[in] Parameters for start time calculation

Returns:

Next aligned start time in nanoseconds since epoch

std::error_code framework::task::pin_current_thread_to_core(
std::uint32_t core_id,
)#

Pin current thread to specified CPU core

Parameters:

core_id[in] CPU core ID to pin to

Returns:

Error code indicating success or failure

std::error_code framework::task::pin_thread_to_core(
pthread_t thread_handle,
std::uint32_t core_id,
)#

Pin thread to specified CPU core using thread handle

Parameters:
  • thread_handle[in] Native pthread handle

  • core_id[in] CPU core ID to pin to

Returns:

Error code indicating success or failure

std::error_code framework::task::set_current_thread_priority(
std::uint32_t priority,
)#

Set real-time priority for current thread

Parameters:

priority[in] Real-time priority (1-99)

Returns:

Error code indicating success or failure

std::error_code framework::task::set_thread_priority(
pthread_t thread_handle,
std::uint32_t priority,
)#

Set real-time priority for thread using thread handle

Parameters:
  • thread_handle[in] Native pthread handle

  • priority[in] Real-time priority (1-99)

Returns:

Error code indicating success or failure

std::error_code framework::task::configure_current_thread(
ThreadConfig config,
)#

Configure the current thread with optional core pinning and priority. Applies both pinning and priority settings to the current thread.

Parameters:

config[in] Thread configuration parameters

Returns:

Error code indicating success or failure

std::error_code framework::task::configure_thread(
std::thread &thread,
ThreadConfig config,
)#

Configure a thread with optional core pinning and priority using std::thread. Applies both pinning and priority settings to the specified thread.

Parameters:
  • thread[in] std::thread reference

  • config[in] Thread configuration parameters

Returns:

Error code indicating success or failure

void framework::task::enable_sanitizer_compatibility()#

Enable sanitizer compatibility for processes with elevated capabilities

When a process has CAP_SYS_NICE (for real-time scheduling), it becomes non-dumpable by default for security. This prevents LeakSanitizer and other debugging tools from working. This function makes the process dumpable again when sanitizers are enabled.

template<typename T>
class BoundedQueue#
#include <bounded_queue.hpp>

Multi-Producer Multi-Consumer bounded queue based on Vyukov’s algorithm

Public Functions

inline explicit BoundedQueue(const std::size_t buffer_size)#

Constructor

Parameters:

buffer_size – Queue capacity (must be a power of 2)

inline bool enqueue(const T &data) noexcept#

Enqueue item (multiple producers) - copy version

Parameters:

data – Item to enqueue

Returns:

true if successful, false if queue full

inline bool enqueue(T &&data) noexcept#

Enqueue item (multiple producers) - move version

Parameters:

data – Item to enqueue (will be moved)

Returns:

true if successful, false if queue full

inline bool dequeue(T &data) noexcept#

Dequeue item (multiple consumers)

Parameters:

data – Reference to store dequeued item

Returns:

true if successful, false if queue empty

inline bool try_push(const T &data) noexcept#

Try to enqueue with optional return - copy version

Parameters:

data – Item to enqueue

Returns:

true if successful, false if queue full

inline bool try_push(T &&data) noexcept#

Try to enqueue with optional return - move version

Parameters:

data – Item to enqueue (will be moved)

Returns:

true if successful, false if queue full

inline bool try_pop(T &data) noexcept#

Try to dequeue with optional return

Parameters:

data – Reference to store dequeued item

Returns:

true if successful, false if queue empty

inline std::optional<T> try_pop() noexcept#

Try to dequeue with std::optional return

Returns:

Optional containing item if successful, nullopt if empty

inline std::size_t capacity() const noexcept#

Get buffer capacity

Returns:

Maximum number of items the queue can hold

class CancellationToken#
#include <task.hpp>

Cancellation token for cooperative task cancellation

Allows tasks to check if they should stop execution and provides a mechanism for external cancellation requests.

Public Functions

CancellationToken() = default#

Default constructor.

~CancellationToken() = default#

Default destructor.

CancellationToken(const CancellationToken&) = delete#

Non-copyable and non-movable.

CancellationToken &operator=(const CancellationToken&) = delete#
CancellationToken(CancellationToken&&) = delete#
CancellationToken &operator=(CancellationToken&&) = delete#
inline bool is_cancelled() const noexcept#

Check if cancellation has been requested

Returns:

true if task should be cancelled, false otherwise

inline void cancel() noexcept#

Request cancellation of the task.

inline void reset() noexcept#

Reset cancellation state (mark as not cancelled)

struct CategoryQueue#
#include <task_scheduler.hpp>

Category queue containing a priority queue and associated lock. Encapsulates a priority queue for a specific task category with thread-safe access.

Public Functions

inline void reserve(const std::size_t capacity)#

Reserve capacity in the underlying queue container

Preserves existing elements while ensuring the underlying container has at least the specified capacity. Existing elements will be maintained in their proper priority order.

Parameters:

capacity[in] Number of elements to reserve space for

Public Members

std::priority_queue<TaskHandle, std::vector<TaskHandle>, TaskTimeComparator> queue#

Priority queue for task handles.

Spinlock lock = {}#

Spinlock for thread-safe access.

struct TaskTimeComparator#
#include <task_scheduler.hpp>

Task comparison for priority queue (dependency generation within same graph first, then scheduled time)

Public Functions

inline bool operator()(
const TaskHandle &a,
const TaskHandle &b,
) const noexcept#

Compare TaskHandles by dependency generation (within same graph), then scheduled time

Parameters:
  • a[in] First task handle to compare

  • b[in] Second task handle to compare

Returns:

true if a should be scheduled after b

struct CoreAssignment#
#include <task_worker.hpp>

Core assignment configuration for explicit worker setup

Public Functions

inline explicit CoreAssignment(std::uint32_t core)#

Create core assignment with normal scheduling

Parameters:

core[in] CPU core ID to assign

inline CoreAssignment(std::uint32_t core, std::uint32_t priority)#

Create core assignment with thread priority

Parameters:
  • core[in] CPU core ID to assign

  • priority[in] Thread priority level (1-99)

Public Members

std::uint32_t core_id#

CPU core ID.

std::optional<std::uint32_t> thread_priority#

Thread priority (1-99, higher = more priority)
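
A brief sketch of constructing explicit core assignments (core numbers and the priority value are illustrative):

```cpp
#include <vector>

// Core 2 with normal scheduling; core 3 with real-time priority 80 (1-99).
std::vector<CoreAssignment> assignments{
    CoreAssignment{2},
    CoreAssignment{3, 80},
};
// assignments[0].thread_priority is empty (normal scheduling);
// assignments[1].thread_priority holds 80.
```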

struct Edge#
#include <task_visualizer.hpp>

Edge information structure. Represents a dependency relationship between tasks.

Public Members

std::string from#

Source task name.

std::string to#

Destination task name.

std::string label#

Optional edge label.

template<std::size_t N>
class FixedString#
#include <function_cache.hpp>

Fixed-size string to avoid heap allocations

Uses std::array for storage with automatic null-termination.

Public Functions

inline explicit FixedString()#

Default constructor - creates empty string.

inline explicit FixedString(std::string_view str)#

Constructor from string view

Parameters:

str[in] Source string to copy

inline explicit FixedString(const char *str)#

Constructor from C string (for nullptr compatibility)

Parameters:

str[in] Source string to copy (nullptr creates empty string)

inline FixedString &operator=(std::string_view str)#

Assignment from string view

Parameters:

str[in] Source string to copy

Returns:

Reference to this object

inline FixedString &operator=(const char *str)#

Assignment from C string (for nullptr compatibility)

Parameters:

str[in] Source string to copy (nullptr creates empty string)

Returns:

Reference to this object

inline const char *c_str() const noexcept#

Get null-terminated string (const version)

Returns:

Pointer to null-terminated C-string

inline char *c_str() noexcept#

Get null-terminated string (non-const version)

Returns:

Pointer to null-terminated C-string

inline constexpr std::size_t capacity() const noexcept#

Get string capacity (maximum size)

Returns:

Maximum number of characters that can be stored

inline std::size_t size() const noexcept#

Get current string length

Returns:

Number of characters in the string (excluding null terminator)

inline bool empty() const noexcept#

Check if string is empty

Returns:

True if string is empty

inline std::array<char, N> &data() noexcept#

Get underlying array for direct access (non-const version)

Returns:

Reference to underlying character array

inline const std::array<char, N> &data() const noexcept#

Get underlying array for direct access (const version)

Returns:

Const reference to underlying character array

inline std::strong_ordering operator<=>(
const FixedString &other,
) const noexcept#

Three-way comparison with another FixedString

Parameters:

other[in] String to compare with

Returns:

Comparison result (strong ordering)

inline bool operator==(const FixedString &other) const noexcept#

Equality comparison with another FixedString

Parameters:

other[in] String to compare with

Returns:

True if strings are equal

inline std::strong_ordering operator<=>(
std::string_view str,
) const noexcept#

Three-way comparison with string view

Parameters:

str[in] String to compare with

Returns:

Comparison result (strong ordering)

inline bool operator==(std::string_view str) const noexcept#

Equality comparison with string view

Parameters:

str[in] String to compare with

Returns:

True if strings are equal

inline bool operator==(const char *str) const noexcept#

Equality comparison with C string (for nullptr compatibility)

Parameters:

str[in] String to compare with (nullptr treated as empty string)

Returns:

True if strings are equal
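
A minimal usage sketch for `FixedString` (buffer size 64 is arbitrary; all operations stay within the fixed inline array, so no heap allocation occurs):

```cpp
FixedString<64> name{"sensor_task"};   // explicit construction from a C string

// size() excludes the null terminator; c_str() is always null-terminated.
// name.size() == 11, name.empty() == false
const char* raw = name.c_str();

name = "resched";                      // assignment reuses the same fixed buffer
bool same = (name == "resched");       // C-string equality overload
```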

template<typename Key, typename Value>
class FlatMap#
#include <flat_map.hpp>

High-performance flat hash map with configurable growth strategies

Uses phmap::flat_hash_map internally with configurable behavior when the container reaches a threshold: Allocate (allow growth), Evict (remove entries), or Throw (exception on overflow).

Public Functions

inline explicit FlatMap(
const std::size_t max_size = DEFAULT_MAX_SIZE,
const GrowthStrategy growth_strategy = GrowthStrategy::Allocate,
)#

Constructor

Parameters:
  • max_size – Maximum number of entries for capacity calculations (must be > 0)

  • growth_strategy – Strategy to use when capacity limits are reached:

    • Allocate: Allow container to grow (default)

    • Evict: Remove entries when threshold reached (use set_eviction_percentages())

    • Throw: Throw exception when max_size would be exceeded

inline void evict_percentage(std::size_t percentage)#

Evict a percentage of entries from the container

Parameters:

percentage – Percentage (1-100) of entries to remove

inline Value &operator[](const Key &key)#

Access element with automatic insertion if key doesn’t exist

Parameters:

key[in] Key to access

Returns:

Reference to the value associated with key

inline std::pair<typename phmap::flat_hash_map<Key, Value>::iterator, bool> insert(
const std::pair<Key, Value> &value,
)#

Insert key-value pair

Parameters:

value[in] Key-value pair to insert

Returns:

Iterator to inserted element and success flag

template<typename ...Args>
inline std::pair<typename phmap::flat_hash_map<Key, Value>::iterator, bool> emplace(
Args&&... args,
)#

Emplace element with in-place construction

Parameters:

args[in] Arguments for constructing the element

Returns:

Iterator to inserted element and success flag

inline auto find(const Key &key) const -> decltype(map_.find(key))#

Find element by key (const version)

Parameters:

key[in] Key to find

Returns:

Iterator to element if found

inline auto find(const Key &key) -> decltype(map_.find(key))#

Find element by key (non-const version)

Parameters:

key[in] Key to find

Returns:

Iterator to element if found

inline auto begin() const -> decltype(map_.begin())#

Get const iterator to beginning

Returns:

Const iterator to first element

inline auto end() const -> decltype(map_.end())#

Get const iterator to end

Returns:

Const iterator past last element

inline auto begin() -> decltype(map_.begin())#

Get iterator to beginning

Returns:

Iterator to first element

inline auto end() -> decltype(map_.end())#

Get iterator to end

Returns:

Iterator past last element

inline std::size_t size() const#

Get number of elements

Returns:

Number of elements in map

inline bool empty() const#

Check if map is empty

Returns:

True if map is empty

inline std::size_t capacity() const#

Get current capacity

Returns:

Current capacity of underlying container

inline std::size_t max_size() const#

Get maximum allowed size

Returns:

Maximum number of elements allowed

inline GrowthStrategy growth_strategy() const#

Get current growth strategy

Returns:

Current growth strategy

inline void erase(const Key &key)#

Remove element by key

Parameters:

key[in] Key to remove

inline void clear()#

Clear all elements.

inline const phmap::flat_hash_map<Key, Value> &underlying() const#

Get const reference to underlying map

Returns:

Const reference to underlying phmap::flat_hash_map

inline phmap::flat_hash_map<Key, Value> &underlying()#

Get reference to underlying map

Returns:

Reference to underlying phmap::flat_hash_map

inline void set_max_size(std::size_t new_max_size)#

Set maximum size

Parameters:

new_max_size[in] Maximum number of elements

inline void set_eviction_percentages(
std::size_t full_percentage,
std::size_t evict_percentage,
)#

Set both eviction percentages for the Evict strategy

Parameters:
  • full_percentage[in] Percentage full (1-100) at which eviction triggers

  • evict_percentage[in] Percentage (1-100) of entries to evict when threshold reached

Throws:

std::invalid_argument – if percentages are invalid

inline void set_growth_strategy(GrowthStrategy strategy)#

Set growth strategy

Parameters:

strategy[in] Growth strategy to use when capacity limits are reached

Throws:

std::invalid_argument – if changing to Evict strategy with currently invalid parameters
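
A sketch of the Evict strategy (the capacity of 100 and the 90/25 thresholds are illustrative; which entries are evicted is up to the implementation):

```cpp
#include <cstdint>
#include <string>

// Cap at 100 entries; once the map is 90% full, evict 25% of entries.
FlatMap<std::uint64_t, std::string> cache{100, GrowthStrategy::Evict};
cache.set_eviction_percentages(90, 25);

cache[42] = "answer";                      // operator[] inserts if absent
if (auto it = cache.find(42); it != cache.end()) {
    // it->second == "answer"
}
cache.erase(42);
```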

class FunctionCache#
#include <function_cache.hpp>

High-performance function name cache with automatic eviction

Caches function names by address using fixed-size strings to avoid allocations. Supports automatic C++ name demangling and provides cache statistics.

Public Functions

explicit FunctionCache(
std::size_t max_size = DEFAULT_MAX_SIZE,
std::size_t full_percentage = DEFAULT_FULL_PERCENTAGE,
std::size_t evict_percentage = DEFAULT_EVICT_PERCENTAGE,
)#

Constructor

Parameters:
  • max_size[in] Maximum number of cached entries

  • full_percentage[in] Percentage full at which eviction triggers

  • evict_percentage[in] Percentage of entries to evict

const char *get(void *addr)#

Get a function name from the cache

Parameters:

addr[in] Function address

Returns:

Cached function name or nullptr if not found

void add_with_demangling(void *addr)#

Add a function name to the cache with automatic demangling

Parameters:

addr[in] Function address

void add(void *addr, std::string_view name_str)#

Add a function name to the cache

Parameters:
  • addr[in] Function address

  • name_str[in] Function name string

void add(void *addr, const char *name_str)#

Add a function name to the cache (C string overload for nullptr compatibility)

Parameters:
  • addr[in] Function address

  • name_str[in] Function name string (nullptr creates empty string)

void clear()#

Clear all cached entries

std::size_t size() const#

Get number of cached entries

Returns:

Number of cached function entries

std::uint64_t get_cache_hits() const#

Get cache hit count

Returns:

Number of cache hits

std::uint64_t get_cache_misses() const#

Get cache miss count

Returns:

Number of cache misses

std::uint64_t get_cache_attempts() const#

Get total cache access attempts

Returns:

Total number of cache access attempts

void evict_percentage(std::size_t percentage)#

Manual eviction control

Parameters:

percentage[in] Percentage of entries to evict

double get_hit_ratio() const#

Get cache hit ratio as percentage

Returns:

Cache hit ratio as percentage (0.0 to 100.0)
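
A short sketch of the cache-then-lookup pattern (`sample_fn` is a placeholder function; casting a function pointer to `void*` is implementation-defined but conventional for symbol caches):

```cpp
static void sample_fn() {}

FunctionCache cache;   // default max size and eviction percentages
void* addr = reinterpret_cast<void*>(&sample_fn);

if (cache.get(addr) == nullptr) {     // miss: not cached yet
    cache.add(addr, "sample_fn");     // or add_with_demangling(addr)
}
const char* name = cache.get(addr);   // hit: returns the cached name
double ratio = cache.get_hit_ratio(); // percentage, 0.0 to 100.0
```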

template<MemoryTriggerRequirements T>
class MemoryTrigger#
#include <memory_trigger.hpp>

Generic memory-based trigger for monitoring CPU memory locations

Monitors a CPU memory location and executes callback when trigger condition is met. Supports any atomic-compatible type. Default comparator prevents double-triggering by only firing on value changes (old != new).

Custom Comparators:

Include transition logic to prevent repeated triggers:

// Safe: Triggers on transition to READY
.with_comparator([](Status old, Status new_val) {
    return old != new_val && new_val == READY;
})
// Unsafe: Triggers repeatedly when value == READY
.with_comparator([](Status old, Status new_val) {
    return new_val == READY;  // Missing old != new_val check!
})

Template Parameters:

T – Type of memory location to monitor (must satisfy MemoryTriggerRequirements)

Public Types

using CallbackType = std::function<void(T old_value, T new_value)>#

Callback function type for memory change notifications.

using ComparatorType = std::function<bool(T old_value, T new_value)>#

Comparator function type for determining when to trigger

using MemoryPtr = std::shared_ptr<std::atomic<T>>#

Shared pointer to atomic memory location

Public Functions

~MemoryTrigger()#

Destructor ensures clean shutdown.

MemoryTrigger(const MemoryTrigger&) = delete#
MemoryTrigger &operator=(const MemoryTrigger&) = delete#
MemoryTrigger(MemoryTrigger&&) = delete#
MemoryTrigger &operator=(MemoryTrigger&&) = delete#
std::error_code start()#

Start monitoring

Returns:

Error code indicating success or failure

void stop()#

Stop monitoring.

bool is_running() const noexcept#

Check if currently running

Returns:

true if monitoring is active, false otherwise

void notify() noexcept#

Notify trigger of memory change (for ConditionVariable mode only)

Public Static Functions

template<MemoryTriggerCallback<T> CallbackT>
static inline Builder create(
MemoryPtr memory_ptr,
CallbackT &&callback,
)#

Create builder for memory trigger

Parameters:
  • memory_ptr[in] Shared pointer to atomic memory location to monitor

  • callback[in] Function to execute when triggered (must satisfy MemoryTriggerCallback concept)

Returns:

Builder instance for configuring the trigger

class Builder#
#include <memory_trigger.hpp>

Builder pattern for configuring MemoryTrigger

Public Functions

template<MemoryTriggerCallback<T> CallbackT>
inline Builder(
MemoryPtr memory_ptr,
CallbackT &&callback,
)#

Create builder with required parameters

Parameters:
  • memory_ptr[in] Shared pointer to atomic memory location to monitor

  • callback[in] Function to execute when triggered (must satisfy MemoryTriggerCallback concept)

inline Builder &with_comparator(ComparatorType comparator) noexcept#

Set custom comparator to determine when to trigger

Include transition detection (old != new) to prevent double-triggering.

Parameters:

comparator[in] Function that returns true if trigger should fire

Returns:

Reference to this builder for chaining

inline Builder &with_notification_strategy(
const NotificationStrategy strategy,
) noexcept#

Set notification strategy (default: ConditionVariable)

Parameters:

strategy[in] Notification strategy to use

Returns:

Reference to this builder for chaining

template<typename Rep, typename Period>
inline Builder &with_polling_interval(
std::chrono::duration<Rep, Period> interval,
) noexcept#

Set polling interval for polling mode (default: 100μs)

Parameters:

interval[in] Polling interval (any std::chrono::duration type)

Returns:

Reference to this builder for chaining

inline Builder &pin_to_core(const std::uint32_t core)#

Pin monitoring thread to specific CPU core

Parameters:

core[in] CPU core ID to pin to

Throws:

std::invalid_argument – if core >= hardware_concurrency

Returns:

Reference to this builder for chaining

inline Builder &with_priority(const std::uint32_t priority) noexcept#

Set thread priority (1-99, higher = more priority)

Parameters:

priority[in] Thread priority level

Returns:

Reference to this builder for chaining

MemoryTrigger build()#

Build the MemoryTrigger

Returns:

Configured MemoryTrigger instance
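
A configuration sketch using the builder. The safe comparator pattern from the class documentation is shown; the `NotificationStrategy::Polling` enumerator name is an assumption based on the documented "polling mode" (the default strategy is ConditionVariable):

```cpp
#include <atomic>
#include <chrono>
#include <cstdint>
#include <memory>

auto status = std::make_shared<std::atomic<std::uint32_t>>(0);

auto trigger = MemoryTrigger<std::uint32_t>::create(
                   status,
                   [](std::uint32_t old_v, std::uint32_t new_v) {
                       // Runs on the monitoring thread when the comparator fires.
                   })
                   // Safe: fires only on the transition to 1, not while it stays 1.
                   .with_comparator([](std::uint32_t old_v, std::uint32_t new_v) {
                       return old_v != new_v && new_v == 1;
                   })
                   .with_polling_interval(std::chrono::microseconds{50})
                   .with_notification_strategy(NotificationStrategy::Polling)
                   .build();

if (!trigger.start()) {   // std::error_code: contextually false on success
    status->store(1);     // monitored write; callback fires on the transition
    trigger.stop();
}
```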

struct MonitorEvent#
#include <task_monitor.hpp>

Simplified monitor event for lock-free communication. Uses TaskHandle for registration and task_id for runtime events.

Public Members

MonitorEventType type = {MonitorEventType::RegisterTask}#

Event type.

Nanos timestamp = {}#

Event timestamp.

std::uint64_t task_id = {}#

Task ID (used for start/end/cancel events)

std::optional<TaskHandle> task_handle#

Handle to task (used for RegisterTask events only)

WorkerId worker = {}#

Worker ID (for start/end events)

TaskStatus status = {TaskStatus::Completed}#

Status (for end events)

struct NodeInfo#
#include <task_visualizer.hpp>

Node information structure. Contains metadata about a task node in the graph.

Public Members

std::string name#

Task name.

TaskCategory category = {BuiltinTaskCategory::Default}#

Task category for coloring.

bool is_completed = {}#

Whether task has completed.

bool has_failed = {}#

Whether task failed.

std::string tooltip#

Optional tooltip text.

class Nvtx#
#include <nvtx.hpp>

Singleton NVTX profiling manager

Automatically detects if nsys or ncu profiling is active by examining the parent process chain on first access and configures NVTX accordingly. Thread-safe singleton implementation.

Public Functions

Nvtx(const Nvtx&) = delete#
Nvtx &operator=(const Nvtx&) = delete#
Nvtx(Nvtx&&) = delete#
Nvtx &operator=(Nvtx&&) = delete#

Public Static Functions

static bool is_enabled()#

Check if NVTX profiling is currently enabled

Returns:

True if NVTX profiling is enabled

static Stats get_stats()#

Get NVTX profiling statistics

Returns:

Statistics structure with current values

static const char *get_function_name(void *func)#

Get function name from cache or resolve it (for C function access)

Parameters:

func[in] Function pointer

Returns:

Function name or nullptr if not found

static void increment_counters(bool resolved)#

Increment function call counters (for C function access)

Parameters:

resolved[in] True if function was resolved from symbols

struct Stats#
#include <nvtx.hpp>

NVTX profiling statistics

Public Members

std::uint64_t total_functions = {}#

Total function calls.

std::uint64_t resolved_functions = {}#

Successfully resolved function names.

std::uint64_t fallback_functions = {}#

Functions that used fallback formatting.

std::size_t cache_entries = {}#

Number of entries in function cache.

std::uint64_t cache_attempts = {}#

Total cache lookup attempts.

std::uint64_t cache_hits = {}#

Cache hits.

std::uint64_t cache_misses = {}#

Cache misses.

double hit_ratio = {}#

Cache hit ratio percentage.

class NvtxScopedRange#
#include <nvtx.hpp>

RAII wrapper for NVTX profiling ranges

Creates a scoped profiling range that automatically ends when destroyed. Used for performance profiling with NVIDIA Nsight tools.

Public Functions

explicit NvtxScopedRange(const char *name)#

Constructor - starts profiling range

Parameters:

name[in] Range name for profiler display

~NvtxScopedRange()#

Destructor - ends profiling range.

NvtxScopedRange(const NvtxScopedRange&) = delete#
NvtxScopedRange &operator=(const NvtxScopedRange&) = delete#
NvtxScopedRange(NvtxScopedRange&&) = delete#
NvtxScopedRange &operator=(NvtxScopedRange&&) = delete#
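
A typical RAII usage sketch (the function name `hot_path` is illustrative):

```cpp
void hot_path() {
    NvtxScopedRange range{"hot_path"};  // range appears in the Nsight timeline
    // ... work measured by the profiler ...
}                                       // range ends automatically here

void report() {
    // Profiling is auto-detected from the parent process chain (nsys/ncu).
    if (Nvtx::is_enabled()) {
        Nvtx::Stats stats = Nvtx::get_stats();
        // stats.hit_ratio, stats.cache_entries, ...
    }
}
```
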
struct SchedulableTask#
#include <task_graph.hpp>

Schedulable task specification. Contains all information needed to create and schedule a task.

Public Members

std::string task_name#

Task name.

TaskFunction func#

Function to execute (supports both signatures)

TaskCategory category = {TaskCategory{BuiltinTaskCategory::Default}}#

Task category.

Nanos timeout = {0}#

Timeout in nanoseconds.

std::any user_data#

User-defined data for task context. For large objects, use std::shared_ptr<T> to avoid copies

std::vector<std::string> dependency_names#

Names of parent tasks.

std::vector<std::size_t> parent_indices#

Pre-computed parent indices for efficient building.

bool disabled = {false}#

Whether this task is disabled from scheduling.

class SingleTaskGraphBuilder#
#include <task_graph.hpp>

Builder for single-task graphs that builds immediately. Provides a fluent interface for creating simple graphs with one task. Reuses the existing TaskGraphBuilder to avoid code duplication.

Public Functions

explicit SingleTaskGraphBuilder(const std::string_view graph_name)#

Constructor

Parameters:

graph_name[in] Name for the graph

SingleTaskGraphBuilder &single_task(
const std::string_view task_name,
)#

Set task name

Parameters:

task_name[in] Name for the task

Returns:

Reference to this builder for chaining

template<typename Rep, typename Period>
inline SingleTaskGraphBuilder &timeout(
std::chrono::duration<Rep, Period> timeout_duration,
)#

Set task execution timeout

Template Parameters:
  • Rep – Arithmetic type representing the number of ticks

  • Period – std::ratio representing the tick period

Parameters:

timeout_duration[in] Maximum execution duration (any std::chrono::duration type)

Returns:

Reference to this builder for chaining

inline SingleTaskGraphBuilder &category(TaskCategory cat)#

Set task category

Parameters:

cat[in] Task category for worker assignment

Returns:

Reference to this builder for chaining

template<typename Func>
inline SingleTaskGraphBuilder &function(
Func &&func,
)#

Set task function (supports reference capture)

Parameters:

func[in] Function to execute - supports lambda with reference capture

Returns:

Reference to this builder for chaining

template<typename T>
inline SingleTaskGraphBuilder &user_data(
T &&data,
)#

Set user data for task context

Parameters:

data[in] User-defined data to pass to contextual functions

Returns:

Reference to this builder for chaining

TaskGraph build()#

Build complete task graph with single task

Returns:

Built TaskGraph ready for scheduling
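
A minimal build sketch (graph and task names are illustrative):

```cpp
#include <chrono>
using namespace std::chrono_literals;

TaskGraph graph = SingleTaskGraphBuilder("heartbeat_graph")
                      .single_task("heartbeat")
                      .timeout(5ms)
                      .function([]() -> TaskResult {
                          return TaskResult{TaskStatus::Completed};
                      })
                      .build();
// graph is now ready to hand to the scheduler.
```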

class Spinlock#
#include <spinlock.hpp>

High-performance cross-platform spinlock

Uses atomic compare_exchange operations with architecture-appropriate memory ordering and CPU pause instructions for optimal performance on both x86 (strong memory model) and ARM (weak memory model).

Public Functions

inline void lock() noexcept#

Acquire lock (blocking). Spins until the lock is acquired using an architecture-optimized pause.

inline bool try_lock() noexcept#

Try to acquire lock (non-blocking)

Returns:

true if lock acquired, false if already locked

inline void unlock() noexcept#

Release lock. Uses release memory ordering to ensure proper synchronization.

inline bool is_locked() const noexcept#

Check if lock is currently held (non-blocking read)

Note

This is a hint only - lock state may change immediately after check

Returns:

true if lock is held, false if available

class SpinlockGuard#
#include <spinlock.hpp>

RAII lock guard for Spinlock. Automatically acquires the lock on construction and releases it on destruction.

Public Functions

inline explicit SpinlockGuard(Spinlock &lock)#

Constructor - acquires the lock

Parameters:

lock[in] Spinlock to acquire

inline ~SpinlockGuard()#

Destructor - releases the lock.

SpinlockGuard(const SpinlockGuard&) = delete#
SpinlockGuard &operator=(const SpinlockGuard&) = delete#
SpinlockGuard(SpinlockGuard&&) = delete#
SpinlockGuard &operator=(SpinlockGuard&&) = delete#
class SpinlockTryGuard#
#include <spinlock.hpp>

RAII try-lock guard for Spinlock. Attempts to acquire the lock on construction and provides success status.

Public Functions

inline explicit SpinlockTryGuard(Spinlock &lock)#

Constructor - attempts to acquire the lock

Parameters:

lock[in] Spinlock to try to acquire

inline ~SpinlockTryGuard()#

Destructor - releases the lock if acquired.

inline bool owns_lock() const noexcept#

Check if lock was successfully acquired

Returns:

true if lock is held, false if acquisition failed

inline explicit operator bool() const noexcept#

Explicit conversion to bool for convenient checking

Returns:

true if lock is held, false if acquisition failed

SpinlockTryGuard(const SpinlockTryGuard&) = delete#
SpinlockTryGuard &operator=(const SpinlockTryGuard&) = delete#
SpinlockTryGuard(SpinlockTryGuard&&) = delete#
SpinlockTryGuard &operator=(SpinlockTryGuard&&) = delete#
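
A short sketch contrasting the two guards (the counter and function names are illustrative):

```cpp
Spinlock lock;
int shared_counter = 0;   // protected by lock

void increment_blocking() {
    SpinlockGuard guard{lock};   // spins until acquired; released on scope exit
    ++shared_counter;
}

void increment_if_free() {
    SpinlockTryGuard guard{lock};
    if (guard) {                 // owns_lock(): acquisition succeeded
        ++shared_counter;
    }                            // destructor releases only if acquired
}
```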
struct StartTimeParams#
#include <task_utils.hpp>

Parameters for calculating start time for next period boundary

Public Members

std::uint64_t current_time_ns = {}#

Current time in nanoseconds since epoch.

std::uint64_t period_ns = {}#

Period in nanoseconds for alignment.

std::int64_t gps_alpha = {}#

GPS alpha parameter for frequency adjustment (default 0)

std::int64_t gps_beta = {}#

GPS beta parameter for phase adjustment (default 0)

class Task#
#include <task.hpp>

Task class for representing units of work

A task encapsulates a function to execute along with metadata such as scheduling time, timeout, and category. Tasks can be executed independently or as part of dependency chains.

Task stores owned copies of task and graph names for safe access across different object lifetimes.

Public Functions

bool is_ready(
const Nanos now,
const Nanos readiness_tolerance_ns,
) const noexcept#

Check if task is ready to execute. A task is ready when: now >= scheduled_time - readiness_tolerance_ns. This tolerance window accounts for scheduling jitter and ensures tasks can be queued for execution slightly before their exact scheduled time, preventing delays due to timing precision limitations.

Parameters:
  • now[in] Current time in nanoseconds

  • readiness_tolerance_ns[in] Tolerance window for readiness checking - allows task to be considered ready this amount of time before its scheduled time

Returns:

true if task is ready to execute, false otherwise

template<typename NowRep, typename NowPeriod, typename ToleranceRep, typename TolerancePeriod>
inline bool is_ready(
std::chrono::duration<NowRep, NowPeriod> now_duration,
std::chrono::duration<ToleranceRep, TolerancePeriod> readiness_tolerance_duration,
) const noexcept#

Check if task is ready to execute (template version). A task is ready when: now >= scheduled_time - readiness_tolerance. This tolerance window accounts for scheduling jitter and ensures tasks can be queued for execution slightly before their exact scheduled time, preventing delays due to timing precision limitations.

Template Parameters:
  • NowRep – Arithmetic type representing the number of ticks for now

  • NowPeriod – std::ratio representing the tick period for now

  • ToleranceRep – Arithmetic type representing the number of ticks for tolerance

  • TolerancePeriod – std::ratio representing the tick period for tolerance

Parameters:
  • now_duration[in] Current time (any std::chrono::duration type)

  • readiness_tolerance_duration[in] Tolerance window for readiness checking - allows task to be considered ready this amount of time before its scheduled time

Returns:

true if task is ready to execute, false otherwise

TaskResult execute() const#

Execute the task. Runs the task function and updates status and result.

Returns:

TaskResult indicating success or failure

void cancel() const noexcept#

Cancel the task. Sets the cancellation token to request task termination.

Nanos get_scheduled_time() const noexcept#

Get scheduled execution time

Returns:

Scheduled time in nanoseconds

Nanos get_timeout_ns() const noexcept#

Get task timeout

Returns:

Timeout in nanoseconds

std::string_view get_task_name() const noexcept#

Get task name

Returns:

Task name reference

std::string_view get_graph_name() const noexcept#

Get graph name this task belongs to

Returns:

Graph name reference

std::uint64_t get_task_id() const noexcept#

Get task ID (guaranteed unique across all tasks)

Returns:

Task ID (64-bit unique identifier assigned at construction)

TaskCategory get_category() const noexcept#

Get task category

Returns:

Task category

std::uint32_t get_dependency_generation() const noexcept#

Get task dependency generation level

Returns:

Dependency generation level (0=root, 1=child, 2=grandchild, etc.)

std::uint64_t get_times_scheduled() const noexcept#

Get task’s graph scheduling count

Returns:

Number of times this task’s graph has been scheduled

void set_times_scheduled(
std::uint64_t times_scheduled,
) const noexcept#

Set task’s graph scheduling count

Parameters:

times_scheduled[in] Graph scheduling count to assign

TaskStatus status() const noexcept#

Get task status. Checks the cancellation token first, then the result status.

Returns:

Current task status

void set_status(TaskStatus new_status) const noexcept#

Set task status

Parameters:

new_status[in] New status to set

bool is_cancelled() const noexcept#

Check if task is cancelled

Returns:

true if task is cancelled, false otherwise

bool has_no_parents() const noexcept#

Check if task has no parent dependencies. Uses dependency generation for an efficient check (generation 0 = no parents).

Returns:

true if task has no parents, false otherwise

bool any_parent_matches(
std::function<bool(TaskStatus)> predicate,
) const noexcept#

Check if any parent matches the given predicate

Parameters:

predicate[in] Function to test each parent status

Returns:

true if any parent matches the predicate, false otherwise

template<std::invocable<TaskStatus> Predicate>
inline bool all_parents_match(
Predicate predicate,
) const noexcept#

Check if all parents match the given predicate

Parameters:

predicate[in] Callable to test each parent status

Returns:

true if all parents match the predicate (or no parents), false otherwise

void add_parent_task(const std::shared_ptr<Task> &parent_task)#

Add parent task for dependency tracking

Parameters:

parent_task[in] Parent task to add as dependency

void reserve_parent_capacity(std::size_t capacity)#

Reserve capacity for parent task statuses vector

Parameters:

capacity[in] Minimum capacity to reserve

void reserve_name_capacity(
std::size_t max_task_name_length,
std::size_t max_graph_name_length,
)#

Reserve capacity for task and graph name strings

Parameters:
  • max_task_name_length[in] Maximum expected task name length

  • max_graph_name_length[in] Maximum expected graph name length

void clear_parent_tasks() noexcept#

Clear all parent task dependencies

template<typename Func>
inline void prepare_for_reuse(
std::string_view new_task_name,
std::string_view new_graph_name,
Func &&new_func,
TaskCategory new_category = TaskCategory{BuiltinTaskCategory::Default},
Nanos new_timeout_ns = Nanos{0},
Nanos new_scheduled_time = Nanos{0},
const std::any &new_user_data = {},
)#

Prepare task for reuse with new configuration (type-safe function version)

Template Parameters:

Func – Function type that must be invocable with proper signature

Parameters:
  • new_task_name[in] New task name

  • new_graph_name[in] New graph name

  • new_func[in] New function to execute

  • new_category[in] New task category

  • new_timeout_ns[in] New timeout in nanoseconds

  • new_scheduled_time[in] New scheduled execution time

  • new_user_data[in] User-defined data for task context

void prepare_for_reuse(
std::string_view new_task_name,
std::string_view new_graph_name,
const TaskFunction &new_func,
TaskCategory new_category = TaskCategory{BuiltinTaskCategory::Default},
Nanos new_timeout_ns = Nanos{0},
Nanos new_scheduled_time = Nanos{0},
const std::any &new_user_data = {},
)#

Prepare task for reuse with new configuration (TaskFunction variant version)

Parameters:
  • new_task_name[in] New task name

  • new_graph_name[in] New graph name

  • new_func[in] New function to execute (supports both signatures)

  • new_category[in] New task category

  • new_timeout_ns[in] New timeout in nanoseconds

  • new_scheduled_time[in] New scheduled execution time

  • new_user_data[in] User-defined data for task context
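
A reuse sketch showing the TaskFunction-variant overload (names, timeout, and scheduled time are illustrative; argument order follows the signature above):

```cpp
auto task = TaskBuilder("initial")
                .function([]() -> TaskResult {
                    return TaskResult{TaskStatus::Completed};
                })
                .build_shared();
task->execute();

// Recycle the same object instead of allocating a new Task.
task->prepare_for_reuse(
    "recycled",                                 // new task name
    "maintenance_graph",                        // new graph name
    []() -> TaskResult { return TaskResult{TaskStatus::Completed}; },
    TaskCategory{BuiltinTaskCategory::Default}, // new category
    Nanos{1'000'000},                           // 1 ms timeout
    Nanos{0});                                  // new scheduled time
task->execute();
```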

class TaskBuilder#
#include <task.hpp>

Fluent builder for creating Task objects. Provides a fluent interface for setting task properties and dependencies.

Public Functions

inline explicit TaskBuilder(std::string task_name)#

Constructor

Parameters:

task_name[in] Name for the task

template<typename Func>
inline TaskBuilder &function(Func &&func)#

Set task function

Template Parameters:

Func – Function type that returns TaskResult with either no parameters or const TaskContext&

Parameters:

func[in] Function to execute

Returns:

Reference to this builder for chaining

TaskBuilder &user_data(std::any data)#

Set user data for task context

Parameters:

data[in] User-defined data to pass to contextual functions. For large objects, use std::shared_ptr<T> to avoid copies

Returns:

Reference to this builder for chaining

template<typename T>
inline TaskBuilder &user_data(T &&data)#

Set user data for task context (template convenience method)

Parameters:

data[in] User-defined data to pass to contextual functions. For large objects, use std::shared_ptr<T> to avoid copies

Returns:

Reference to this builder for chaining

template<typename Rep, typename Period>
inline TaskBuilder &timeout(
std::chrono::duration<Rep, Period> timeout_duration,
)#

Set task timeout

Template Parameters:
  • Rep – Arithmetic type representing the number of ticks

  • Period – std::ratio representing the tick period

Parameters:

timeout_duration[in] Timeout duration (any std::chrono::duration type)

Returns:

Reference to this builder for chaining

TaskBuilder &category(TaskCategory cat)#

Set task category

Parameters:

cat[in] Task category for worker assignment

Returns:

Reference to this builder for chaining

TaskBuilder &category(BuiltinTaskCategory builtin_cat)#

Set task category from builtin category

Parameters:

builtin_cat[in] Builtin task category for worker assignment

Returns:

Reference to this builder for chaining

template<typename Rep, typename Period>
inline TaskBuilder &scheduled_time(
std::chrono::duration<Rep, Period> scheduled_time_duration,
)#

Set task scheduled time

Template Parameters:
  • Rep – Arithmetic type representing the number of ticks

  • Period – std::ratio representing the tick period

Parameters:

scheduled_time_duration[in] When task should execute (any std::chrono::duration type)

Returns:

Reference to this builder for chaining

TaskBuilder &graph_name(std::string_view graph_name)#

Set graph name

Parameters:

graph_name[in] Name of the graph this task belongs to

Returns:

Reference to this builder for chaining

TaskBuilder &depends_on(std::shared_ptr<Task> parent_task)#

Add parent task dependency

Parameters:

parent_task[in] Shared pointer to parent task

Returns:

Reference to this builder for chaining

TaskBuilder &depends_on(
const std::vector<std::shared_ptr<Task>> &parent_tasks,
)#

Add multiple parent task dependencies

Parameters:

parent_tasks[in] Vector of shared pointers to parent tasks

Returns:

Reference to this builder for chaining

Task build()#

Build the task

Throws:

std::invalid_argument – if task name is empty

Returns:

The created Task object

std::shared_ptr<Task> build_shared()#

Build the task as a shared_ptr

Throws:

std::invalid_argument – if task name is empty

Returns:

Shared pointer to the created Task object
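
Putting the builder methods above together, a dependent task carrying typed user data, a timeout, and a parent dependency might look like the following sketch (the payload type and task names are illustrative, not part of the library):

```cpp
#include <task.hpp>

#include <chrono>
#include <memory>
#include <vector>

using namespace std::chrono_literals;

// Parent task with a plain no-argument function.
auto parent = TaskBuilder("load_input")
                  .function([]() -> TaskResult {
                      return TaskResult{TaskStatus::Completed};
                  })
                  .build_shared();

// Child task: a contextual function reads the user data; the payload is
// wrapped in std::shared_ptr as recommended above to avoid copies.
auto child = TaskBuilder("process_input")
                 .function([](const TaskContext &ctx) -> TaskResult {
                     if (auto data = ctx.get_user_data<std::shared_ptr<std::vector<int>>>()) {
                         // ... use (*data)->size(), etc. ...
                     }
                     return TaskResult{TaskStatus::Completed};
                 })
                 .user_data(std::make_shared<std::vector<int>>(1024))
                 .timeout(100ms)
                 .depends_on(parent)
                 .build_shared();
```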

class TaskCategory#
#include <task_category.hpp>

Type-erased task category wrapper

Provides a unified interface for both built-in and user-defined task categories. Can hold any category type declared with the DECLARE_TASK_CATEGORIES macro and provides efficient comparison and string conversion.

Public Functions

template<typename EnumType>
inline explicit TaskCategory(
const EnumType value,
)#

Constructor from any wise_enum category type

Parameters:

value[in] Category enum value

inline constexpr std::uint64_t id() const noexcept#

Get category ID

Returns:

Unique category identifier

inline constexpr std::string_view name() const noexcept#

Get category name

Returns:

Category name as string view

inline constexpr auto operator<=>(
const TaskCategory &other,
) const noexcept#

Three-way comparison operator

Parameters:

other[in] TaskCategory to compare with

Returns:

Comparison result based on category IDs

constexpr bool operator==(
const TaskCategory &other,
) const noexcept = default#

Equality comparison

Parameters:

other[in] TaskCategory to compare with

Returns:

true if categories have the same ID
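
Assuming a user-defined category enum declared with DECLARE_TASK_CATEGORIES (the enum name and its enumerators below are hypothetical), the wrapper compares by unique ID and exposes the enumerator's string form:

```cpp
#include <task_category.hpp>

#include <string_view>

// Hypothetical declaration; see the DECLARE_TASK_CATEGORIES macro above:
// DECLARE_TASK_CATEGORIES(MyCategories, Physics, Rendering)

void example(MyCategories physics_value, MyCategories rendering_value) {
    const TaskCategory physics{physics_value};
    const TaskCategory rendering{rendering_value};

    // operator== and operator<=> compare the unique category IDs.
    if (physics != rendering) {
        // name() returns the enumerator's string form, e.g. "Physics".
        std::string_view n = physics.name();
        (void)n;
    }
}
```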

struct TaskContext#
#include <task.hpp>

Execution context passed to task functions. Provides access to cancellation and user-defined data.

Public Functions

inline explicit TaskContext(
std::shared_ptr<CancellationToken> token,
std::any data = {},
)#

Constructor

Parameters:
  • token[in] Cancellation token for cooperative cancellation

  • data[in] User-defined data for task context

template<typename T>
inline std::optional<T> get_user_data() const#

Helper to safely get user data of specific type

Returns:

Optional containing the data if type matches, nullopt otherwise

Public Members

std::shared_ptr<CancellationToken> cancellation_token#

Cancellation token for cooperative cancellation.

std::any user_data#

User-defined data for task-specific context. For large objects, use std::shared_ptr<T> to avoid copies
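
A context-aware task function can combine both members above for cooperative cancellation and typed data access. In this sketch, `is_cancelled()` is an assumed method name on CancellationToken, used for illustration only:

```cpp
#include <task.hpp>

auto fn = [](const TaskContext &ctx) -> TaskResult {
    // Cooperative cancellation: check the token before doing work.
    // is_cancelled() is a hypothetical accessor on CancellationToken.
    if (ctx.cancellation_token && ctx.cancellation_token->is_cancelled()) {
        return TaskResult{TaskStatus::Cancelled, "cancelled before start"};
    }

    // Safely extract typed user data; nullopt if the type does not match.
    if (const auto value = ctx.get_user_data<int>()) {
        // ... use *value ...
    }
    return TaskResult{TaskStatus::Completed};
};
```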

class TaskErrorCategory : public std::error_category#
#include <task_errors.hpp>

Custom error category for task framework errors

This class provides human-readable error messages and integrates task errors with the standard C++ error handling system.

Public Functions

inline const char *name() const noexcept override#

Get the name of this error category

Returns:

The category name as a C-style string

inline std::string message(const int condition) const override#

Get a descriptive message for the given error code

Parameters:

condition[in] The error code value

Returns:

A descriptive error message

inline std::error_condition default_error_condition(
const int condition,
) const noexcept override#

Map task errors to standard error conditions where applicable

Parameters:

condition[in] The error code value

Returns:

The equivalent standard error condition, or a default-constructed condition

Public Static Functions

static inline const char *name(const int condition)#

Get the name of the error code enum value

Parameters:

condition[in] The error code value

Returns:

The enum name as a string
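
Because TaskErrorCategory derives from std::error_category, error codes returned by the framework (for example by TaskMonitor::start()) flow through the standard C++ error machinery. A small self-contained sketch of generic reporting:

```cpp
#include <string>
#include <system_error>

// For a framework error code, name() and message() are supplied by
// TaskErrorCategory; for standard codes, by their own category.
std::string describe(const std::error_code ec) {
    if (!ec) {
        return "success";
    }
    return std::string{ec.category().name()} + ": " + ec.message();
}

// default_error_condition() additionally enables comparisons such as
// `ec == std::errc::timed_out` where a mapping exists.
```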

struct TaskExecutionRecord#
#include <task_monitor.hpp>

Task execution record for detailed statistics. Stores comprehensive execution data for performance analysis.

Public Members

std::string task_name#

Task name (copied for persistence)

std::string graph_name#

Graph name (copied for persistence)

std::uint32_t dependency_generation = {}#

Dependency generation (copied for persistence)

std::uint64_t times_scheduled = {}#

Number of times the task’s graph has been scheduled (copied for persistence)

Nanos scheduled_time = {}#

Originally scheduled time (copied for persistence)

WorkerId worker = {}#

Worker that executed task.

Nanos start_time = {}#

Actual start time.

Nanos end_time = {}#

Completion time.

Nanos jitter_ns = {}#

Scheduling jitter.

Nanos duration_ns = {}#

Execution duration.

TaskStatus status = {TaskStatus::Completed}#

Final execution status.

bool was_cancelled = {}#

Whether task was cancelled.

class TaskGraph#
#include <task_graph.hpp>

Task graph for managing complex task relationships. Provides a fluent interface for building task graphs with dependencies.

Public Functions

inline explicit TaskGraph(const std::string_view graph_name)#

Constructor with graph name

Parameters:

graph_name[in] Name for this graph

inline TaskGraph(
const std::string_view graph_name,
const std::size_t task_pool_capacity_multiplier,
)#

Constructor with graph name and task pool capacity multiplier

Parameters:
  • graph_name[in] Name for this graph

  • task_pool_capacity_multiplier[in] Multiplier for TaskPool capacity

inline const std::string &get_graph_name() const noexcept#

Get graph name

Returns:

Graph name

inline std::size_t get_task_pool_capacity() const noexcept#

Get task pool capacity

Note: Actual capacity may be larger than (task_count × multiplier) due to BoundedQueue rounding up to the next power of 2 for performance.

Returns:

Current TaskPool capacity (rounded to power of 2), or 0 if not built

void clear_scheduled_tasks()#

Force cleanup of scheduled tasks to break circular references. Call this to ensure all tasks are properly released back to the pool.

TaskGraphBuilder register_task(const std::string_view task_name)#

Register new task builder for multi-task graphs

Parameters:

task_name[in] Name for the task

Returns:

TaskGraphBuilder instance for fluent task creation

inline const std::vector<SchedulableTask> &get_task_specs(
) const noexcept#

Get task specifications for scheduling

Returns:

Vector of schedulable task specifications

void clear()#

Clear the entire graph. Removes all task specifications, handles, and built state.

inline size_t size() const noexcept#

Get number of tasks in graph

Returns:

Number of tasks

inline bool empty() const noexcept#

Check if graph is empty

Returns:

true if graph has no tasks

bool task_has_status(
const std::string_view name,
TaskStatus expected_status,
) const#

Check if a task has the specified status

Parameters:
  • name[in] Task name

  • expected_status[in] Expected status to check against

Throws:

std::runtime_error – if graph has not been built

Returns:

true if task exists and has the expected status, false otherwise

bool set_task_status(
const std::string_view name,
TaskStatus new_status,
)#

Set the status of a specific task

Note

When setting TaskStatus::Cancelled, this will properly cancel the task using the cancel() method to set both status and cancellation token

Parameters:
  • name[in] Task name to modify

  • new_status[in] New status to set

Throws:

std::runtime_error – if graph has not been built

Returns:

true if task exists and status was set, false if task not found

void build()#

Build/finalize the task graph for optimized scheduling. Pre-processes all dependencies, creates task wrappers, and allocates status handles. Must be called before scheduling tasks for optimal performance.

Throws:

std::runtime_error – if graph has circular dependencies or other issues

inline bool is_built() const noexcept#

Check if graph has been built for optimized scheduling

Returns:

true if graph has been built

inline std::uint64_t increment_times_scheduled() noexcept#

Get and increment times scheduled for this graph

Returns:

Current times scheduled count (increments for each schedule call)

inline std::uint64_t get_times_scheduled() const noexcept#

Get current times scheduled count without incrementing

Returns:

Current times scheduled count

std::vector<std::shared_ptr<Task>> &prepare_tasks(
Nanos execution_time,
)#

Acquire tasks from the pool for the current scheduling round

Populates scheduled_tasks_ with fresh tasks from the pool

Warning

This method is NOT thread-safe. Concurrent calls will cause race conditions in scheduled_tasks_ vector manipulation and task dependency setup. External synchronization is required for concurrent access.

Parameters:

execution_time[in] Execution time for the tasks

Throws:

std::runtime_error – if graph has not been built

Returns:

Reference to scheduled_tasks_ vector

template<typename Rep, typename Period>
inline std::vector<std::shared_ptr<Task>> &prepare_tasks(
std::chrono::duration<Rep, Period> execution_time_duration,
)#

Acquire tasks from the pool for the current scheduling round (template version)

Populates scheduled_tasks_ with fresh tasks from the pool

Warning

This method is NOT thread-safe. Concurrent calls will cause race conditions in scheduled_tasks_ vector manipulation and task dependency setup. External synchronization is required for concurrent access.

Template Parameters:
  • Rep – Arithmetic type representing the number of ticks

  • Period – std::ratio representing the tick period

Parameters:

execution_time_duration[in] Execution time for the tasks (any std::chrono::duration type)

Throws:

std::runtime_error – if graph has not been built

Returns:

Reference to scheduled_tasks_ vector

std::string to_string() const#

Generate string visualization of task dependency graph

Returns:

String representation of the task graph

TaskPoolStats get_pool_stats() const#

Get task pool statistics (only available after build())

Throws:

std::runtime_error – if graph has not been built

Returns:

TaskPool statistics

bool disable_task(const std::string_view task_name)#

Disable a task from being scheduled in future scheduling rounds

Parameters:

task_name[in] Name of the task to disable

Returns:

true if task was found and disabled, false if task not found

bool enable_task(const std::string_view task_name)#

Enable a previously disabled task for scheduling

Parameters:

task_name[in] Name of the task to enable

Returns:

true if task was found and enabled, false if task not found

bool is_task_disabled(const std::string_view task_name) const#

Check if a task is currently disabled

Parameters:

task_name[in] Name of the task to check

Returns:

true if task is disabled, false if enabled or not found

bool is_task_or_parent_disabled(std::size_t task_index) const#

Check if a task or any of its parents is disabled

Parameters:

task_index[in] Index of the task to check

Returns:

true if task or any parent is disabled

Public Static Functions

static SingleTaskGraphBuilder create(
const std::string_view graph_name,
)#

Create a single-task graph builder with graph name

Parameters:

graph_name[in] Name for the graph

Returns:

SingleTaskGraphBuilder instance for fluent single-task creation

Friends

friend class TaskGraphBuilder
class TaskGraphBuilder#
#include <task_graph.hpp>

Graph-specific builder for creating task specifications in a TaskGraph. Provides a fluent interface for setting task properties with name-based dependencies.

Public Functions

TaskGraphBuilder(TaskGraph &graph, const std::string_view task_name)#

Constructor

Parameters:
  • graph[in] Reference to parent task graph

  • task_name[in] Name for the task

TaskGraphBuilder(const TaskGraphBuilder&) = delete#
TaskGraphBuilder &operator=(const TaskGraphBuilder&) = delete#
TaskGraphBuilder(TaskGraphBuilder&&) = delete#
TaskGraphBuilder &operator=(TaskGraphBuilder&&) = delete#
~TaskGraphBuilder() = default#
template<typename Rep, typename Period>
inline TaskGraphBuilder &timeout(
std::chrono::duration<Rep, Period> timeout_duration,
)#

Set task execution timeout

Template Parameters:
  • Rep – Arithmetic type representing the number of ticks

  • Period – std::ratio representing the tick period

Parameters:

timeout_duration[in] Maximum execution duration (any std::chrono::duration type)

Returns:

Reference to this builder for chaining

TaskGraphBuilder &category(TaskCategory cat)#

Set task category

Parameters:

cat[in] Task category for worker assignment

Returns:

Reference to this builder for chaining

TaskGraphBuilder &category(BuiltinTaskCategory builtin_cat)#

Set task category from builtin category

Parameters:

builtin_cat[in] Builtin task category for worker assignment

Returns:

Reference to this builder for chaining

TaskGraphBuilder &task_pool_capacity_multiplier(
const std::size_t multiplier,
)#

Set task pool capacity multiplier for the parent graph

Multiplies the total number of tasks in the graph to determine the capacity of the reuse pool. Higher values reduce heap allocations during bursty execution patterns at the cost of increased memory usage.

Parameters:

multiplier[in] Multiplier for TaskPool capacity calculation

Returns:

Reference to this builder for chaining

template<typename Func>
inline TaskGraphBuilder &function(
Func &&func,
)#

Set function for the current task. Supports function types with no arguments or with a const TaskContext&.

Parameters:

func[in] Function to execute

Returns:

Reference to this builder for chaining

TaskGraphBuilder &depends_on(const std::string_view parent_name)#

Add single parent dependency

Parameters:

parent_name[in] Name of parent task

Returns:

Reference to this builder for chaining

TaskGraphBuilder &depends_on(
const std::vector<std::string_view> &parent_names,
)#

Add multiple parent dependencies

Parameters:

parent_names[in] Names of parent tasks

Returns:

Reference to this builder for chaining

TaskGraphBuilder &user_data(std::any data)#

Set user data for task context

Parameters:

data[in] User-defined data to pass to contextual functions. For large objects, use std::shared_ptr<T> to avoid copies

Returns:

Reference to this builder for chaining

template<typename T>
inline TaskGraphBuilder &user_data(T &&data)#

Set user data for task context (template convenience method)

Parameters:

data[in] User-defined data to pass to contextual functions. For large objects, use std::shared_ptr<T> to avoid copies

Returns:

Reference to this builder for chaining

std::string add()#

Add task to graph for multi-task graphs

Returns:

Name of the created task for use in dependencies
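
The name returned by add() feeds directly into depends_on() for later tasks. A sketch of a diamond-shaped graph built this way (task and graph names are illustrative):

```cpp
#include <task_graph.hpp>

TaskGraph graph{"pipeline"};

// Each add() returns the task's name for use as a dependency.
const auto read = graph.register_task("read")
                      .function([]() -> TaskResult { return TaskResult{TaskStatus::Completed}; })
                      .add();

const auto left = graph.register_task("transform_a")
                      .function([]() -> TaskResult { return TaskResult{TaskStatus::Completed}; })
                      .depends_on(read)
                      .add();

const auto right = graph.register_task("transform_b")
                       .function([]() -> TaskResult { return TaskResult{TaskStatus::Completed}; })
                       .depends_on(read)
                       .add();

graph.register_task("merge")
    .function([]() -> TaskResult { return TaskResult{TaskStatus::Completed}; })
    .depends_on({left, right})
    .add();

graph.build();  // throws std::runtime_error on circular dependencies
```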

class TaskHandle#
#include <task.hpp>

Handle to a scheduled task with reset capability. Provides access to task status and reset functionality.

Public Functions

inline explicit TaskHandle(std::shared_ptr<Task> task)#

Constructor

Parameters:

task[in] Task instance (must be valid)

Throws:

std::invalid_argument – if task is nullptr

inline Task *operator->() const#

Arrow operator for direct access to Task methods

Returns:

Pointer to the underlying Task object

class TaskMonitor#
#include <task_monitor.hpp>

Lock-free task monitor using a producer-consumer pattern

Provides non-blocking task monitoring with event-based communication, performance statistics, timeout detection, and dependency visualization.

Public Functions

explicit TaskMonitor(
std::optional<std::size_t> max_execution_records = std::nullopt,
)#

Constructor initializes monitor

Note

Thread Safety: Not thread-safe. Must be called from single thread.

Parameters:

max_execution_records[in] Maximum number of execution records to keep (nullopt for unlimited)

~TaskMonitor() noexcept#

Destructor ensures clean shutdown

Automatically calls stop() to ensure background thread is properly terminated before object destruction. Safe to call even if stop() was already called explicitly.

Note

Thread Safety: Safe. Multiple calls to stop() are handled gracefully.

TaskMonitor(const TaskMonitor&) = delete#
TaskMonitor &operator=(const TaskMonitor&) = delete#
TaskMonitor(TaskMonitor&&) = delete#
TaskMonitor &operator=(TaskMonitor&&) = delete#
std::error_code start(
std::optional<std::uint32_t> core_id = std::nullopt,
std::chrono::microseconds sleep_duration = DEFAULT_MONITOR_SLEEP_US,
)#

Start monitoring thread

Note

Thread Safety: Not thread-safe. Must be called from single thread before any monitoring operations begin.

Parameters:
  • core_id[in] CPU core to pin monitor thread to (nullopt for no pinning)

  • sleep_duration[in] Sleep duration between monitoring cycles

Returns:

Error code indicating success or failure

void stop() noexcept#

Stop monitoring thread

Blocks until background thread terminates. Safe to call multiple times. Automatically called by destructor.

Note

Thread Safety: Can be called safely from multiple threads. Subsequent calls after first are no-ops.

void clear_stats()#

Clear execution statistics

Clears all stored execution records and resets statistics counters. Task registration data is preserved for continued monitoring.

Note

Thread Safety: Thread-safe. Uses mutex protection and can be called safely while monitoring is active. Only clears execution history, leaving active task monitoring data intact.

std::error_code register_task(const TaskHandle &task_handle)#

Register task for monitoring (real-time safe)

TaskHandle is copied, so no lifetime concerns for the caller.

Note

Thread Safety: Thread-safe and real-time safe. Uses lock-free queue for communication with background thread. Can be called concurrently from multiple threads.

Parameters:

task_handle[in] TaskHandle to register for monitoring

Returns:

Error code indicating success or failure

std::error_code record_start(
std::uint64_t task_id,
WorkerId worker_id,
Nanos start_time,
)#

Record task execution start (real-time safe)

Note

Thread Safety: Thread-safe and real-time safe. Uses lock-free queue for communication with background thread. Can be called concurrently from multiple threads.

Parameters:
  • task_id[in] Task ID (guaranteed unique 64-bit identifier)

  • worker_id[in] Worker executing the task

  • start_time[in] Execution start timestamp

Returns:

Error code indicating success or failure

std::error_code record_end(
std::uint64_t task_id,
Nanos end_time,
TaskStatus status = TaskStatus::Completed,
)#

Record task execution completion (real-time safe)

Note

Thread Safety: Thread-safe and real-time safe. Uses lock-free queue for communication with background thread. Can be called concurrently from multiple threads.

Parameters:
  • task_id[in] Task ID (guaranteed unique 64-bit identifier)

  • end_time[in] Completion timestamp

  • status[in] Final execution status

Returns:

Error code indicating success or failure

std::error_code cancel_task(std::uint64_t task_id)#

Cancel a task (real-time safe)

Note

Thread Safety: Thread-safe and real-time safe. Uses lock-free queue for communication with background thread. Can be called concurrently from multiple threads.

Parameters:

task_id[in] Task ID (guaranteed unique 64-bit identifier)

Returns:

Error code indicating success or failure

void print_summary() const#

Print execution statistics summary

Note

Thread Safety: Thread-safe. Uses mutex protection and can be called safely while monitoring is active.

std::error_code write_stats_to_file(
const std::string &filename,
TraceWriteMode mode = TraceWriteMode::Overwrite,
) const#

Write execution statistics to a JSON file for later post-processing. Each execution record is written as one JSON object per line.

Note

Thread Safety: Thread-safe. Uses mutex protection and can be called safely while monitoring is active.

Parameters:
  • filename[in] Output file path

  • mode[in] Write mode (Overwrite or Append)

Returns:

Error code indicating success or failure

std::error_code write_chrome_trace_to_file(
const std::string &filename,
TraceWriteMode mode = TraceWriteMode::Overwrite,
) const#

Write execution statistics to a Chrome trace format file. Each execution record is written as one Chrome trace event per line.

Note

Thread Safety: Thread-safe. Uses mutex protection and can be called safely while monitoring is active.

Parameters:
  • filename[in] Output file path

  • mode[in] Write mode (Overwrite or Append)

Returns:

Error code indicating success or failure

Public Static Attributes

static constexpr std::chrono::microseconds DEFAULT_MONITOR_SLEEP_US = {1000}#

Default monitoring sleep duration in microseconds.
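
The typical monitor lifecycle is start, record, report, stop. A sketch using the real-time-safe recording calls above (the task ID, worker ID, and timestamps would normally come from the scheduler rather than being produced by hand):

```cpp
#include <task_monitor.hpp>

#include <optional>

TaskMonitor monitor{/*max_execution_records=*/100'000};

// start() returns std::error_code; pinning is optional.
if (const auto ec = monitor.start(/*core_id=*/std::nullopt)) {
    // handle startup failure
}

// Real-time safe calls, typically issued from worker threads:
// monitor.register_task(task_handle);
// monitor.record_start(task_id, worker_id, start_time);
// monitor.record_end(task_id, end_time, TaskStatus::Completed);

monitor.print_summary();
monitor.write_chrome_trace_to_file("trace.json");  // viewable in chrome://tracing
monitor.stop();  // also invoked automatically by the destructor
```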

struct TaskMonitorData#
#include <task_monitor.hpp>

Consolidated task monitoring data.

Public Functions

TaskMonitorData() = default#

Default constructor (for container compatibility)

inline explicit TaskMonitorData(const TaskHandle &handle)#

Constructor from TaskHandle

Parameters:

handle[in] Task handle to monitor

Public Members

std::optional<TaskHandle> task_handle#

Handle to task (contains all task metadata)

Nanos start_time = {}#

Actual execution start time.

WorkerId worker = {}#

Worker assignment.

bool cancelled = {false}#

Cancellation flag.

class TaskPool : public std::enable_shared_from_this<TaskPool>#
#include <task_pool.hpp>

Thread-safe object pool for Task instances

Uses a BoundedQueue for lock-free pooling with automatic fallback to heap allocation when pool is exhausted. Provides RAII semantics through custom shared_ptr deleters.

Public Functions

~TaskPool() = default#

Default destructor.

TaskPool(const TaskPool&) = delete#
TaskPool &operator=(const TaskPool&) = delete#
TaskPool(TaskPool&&) = delete#
TaskPool &operator=(TaskPool&&) = delete#
std::shared_ptr<Task> acquire_task(
std::string_view task_name,
std::string_view graph_name,
)#

Acquire a Task from the pool or create new one

Parameters:
  • task_name[in] Task name

  • graph_name[in] Graph name

Returns:

Shared pointer to Task with custom deleter for pool return

TaskPoolStats get_stats() const noexcept#

Get pool statistics

Returns:

TaskPoolStats with performance metrics

std::size_t capacity() const noexcept#

Get current pool capacity

Returns:

Maximum number of tasks the pool can hold

Public Static Functions

static std::shared_ptr<TaskPool> create(
std::size_t initial_size = DEFAULT_POOL_SIZE,
std::size_t max_task_parents = DEFAULT_MAX_TASK_PARENTS,
std::size_t max_task_name_length = DEFAULT_MAX_TASK_NAME_LENGTH,
std::size_t max_graph_name_length = DEFAULT_MAX_GRAPH_NAME_LENGTH,
)#

Factory function to create TaskPool managed by shared_ptr

This ensures TaskPool is always managed by shared_ptr, enabling safe lifetime management for tasks returned to the pool.

Parameters:
  • initial_size[in] Initial pool capacity (will be rounded up to power of 2)

  • max_task_parents[in] Maximum expected parents per task for capacity reservation

  • max_task_name_length[in] Maximum expected task name length for string capacity reservation

  • max_graph_name_length[in] Maximum expected graph name length for string capacity reservation

Throws:

std::bad_alloc – if pool cannot be properly initialized to requested capacity

Returns:

Shared pointer to TaskPool instance
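
Acquisition and release are tied to shared_ptr lifetime through the custom deleter, so no explicit release call is needed. A sketch:

```cpp
#include <task_pool.hpp>

auto pool = TaskPool::create(/*initial_size=*/256);

{
    // Served from the pool if possible, otherwise heap-allocated.
    auto task = pool->acquire_task("worker_task", "my_graph");
    // ... configure and execute the task ...
}   // last reference dropped: the custom deleter returns it to the pool

const TaskPoolStats stats = pool->get_stats();
// stats.pool_hits, stats.new_tasks_created, stats.tasks_released
```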

struct TaskPoolStats#
#include <task_pool.hpp>

Statistics for TaskPool performance monitoring

Public Functions

inline std::uint64_t total_tasks_served() const noexcept#

Get total tasks served by pool

Returns:

Sum of pool hits and newly created tasks

inline double hit_rate_percent() const noexcept#

Get instantaneous pool hit rate as percentage

Returns:

Current hit rate (0.0 to 100.0) based on cumulative stats snapshot, or 0.0 if no tasks have been served yet

Public Members

std::uint64_t pool_hits = {}#

Tasks served from pool (reused existing tasks)

std::uint64_t new_tasks_created = {}#

Tasks created new when pool was empty.

std::uint64_t tasks_released = {}#

Tasks returned to pool for reuse.
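
The derived metrics follow directly from the counters above. A self-contained sketch of the arithmetic, using a mirror struct rather than the library header:

```cpp
#include <cstdint>

// Mirror of the counters documented above, for illustrating the math only.
struct Stats {
    std::uint64_t pool_hits = 0;
    std::uint64_t new_tasks_created = 0;
};

// total_tasks_served() is pool hits plus newly created tasks;
// hit_rate_percent() is hits as a share of that total.
double hit_rate_percent(const Stats &s) {
    const std::uint64_t total = s.pool_hits + s.new_tasks_created;
    if (total == 0) {
        return 0.0;  // no tasks served yet
    }
    return 100.0 * static_cast<double>(s.pool_hits) / static_cast<double>(total);
}
```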

struct TaskResult#
#include <task.hpp>

Result returned by task execution. Contains status information and an optional message for diagnostics.

Public Functions

TaskResult() = default#

Default constructor creates successful result.

inline explicit TaskResult(
const TaskStatus s,
const std::string_view msg = "",
)#

Constructor with status and message

Parameters:
  • s[in] Task execution status

  • msg[in] Optional message describing the result

inline bool is_success() const noexcept#

Check if task completed successfully

Returns:

true if task completed successfully, false otherwise

Public Members

TaskStatus status = {TaskStatus::Completed}#

Execution status.

std::string message#

Optional message for details/errors.
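
A non-successful result can carry a diagnostic message alongside its status; a short sketch using the statuses shown elsewhere in this reference:

```cpp
#include <task.hpp>

const TaskResult ok{};  // default constructor: TaskStatus::Completed
const TaskResult stopped{TaskStatus::Cancelled, "stopped before deadline"};
// ok.is_success() yields true; stopped.is_success() yields false
```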

class TaskScheduler#
#include <task_scheduler.hpp>

High-performance task scheduler with category-based worker assignment

Manages multiple worker threads that execute tasks based on categories, with support for real-time scheduling, core pinning, and precise timing. Uses lock-free monitoring and efficient task distribution.

Use TaskSchedulerBuilder to construct instances with the builder pattern.

Public Functions

~TaskScheduler()#

Destructor - stops all threads and prints statistics.

TaskScheduler(const TaskScheduler&) = delete#
TaskScheduler &operator=(const TaskScheduler&) = delete#
TaskScheduler(TaskScheduler&&) = delete#
TaskScheduler &operator=(TaskScheduler&&) = delete#
inline const WorkersConfig &get_workers_config() const noexcept#

Get current worker configuration

Returns:

Const reference to workers configuration

inline void print_monitor_stats() const#

Print task monitor execution statistics

inline std::error_code write_monitor_stats_to_file(
const std::string &filename,
TraceWriteMode mode = TraceWriteMode::Overwrite,
) const#

Write task monitor execution statistics to file for later post-processing

Parameters:
  • filename[in] Output file path

  • mode[in] Write mode (Overwrite or Append)

Returns:

Error code indicating success or failure

inline std::error_code write_chrome_trace_to_file(
const std::string &filename,
TraceWriteMode mode = TraceWriteMode::Overwrite,
) const#

Write task monitor execution statistics to Chrome trace format file

Parameters:
  • filename[in] Output file path

  • mode[in] Write mode (Overwrite or Append)

Returns:

Error code indicating success or failure

void schedule(
TaskGraph &graph,
const Nanos execution_time = Time::now_ns(),
)#

Schedule a task graph with dependencies

Warning

This method is NOT thread-safe for concurrent calls with the SAME TaskGraph instance. Multiple threads can safely call schedule() concurrently with DIFFERENT TaskGraph instances, but concurrent calls with the same TaskGraph must be externally synchronized to prevent race conditions in TaskGraph::prepare_tasks().

Parameters:
  • graph[in] Task graph containing task specifications (must be built)

  • execution_time[in] Base execution time for all tasks (defaults to now)

Throws:

std::runtime_error – if graph has not been built

template<typename Rep, typename Period>
inline void schedule(
TaskGraph &graph,
std::chrono::duration<Rep, Period> execution_time_duration,
)#

Schedule a task graph with dependencies (template version)

Warning

This method is NOT thread-safe for concurrent calls with the SAME TaskGraph instance. Multiple threads can safely call schedule() concurrently with DIFFERENT TaskGraph instances, but concurrent calls with the same TaskGraph must be externally synchronized to prevent race conditions in TaskGraph::prepare_tasks().

Template Parameters:
  • Rep – Arithmetic type representing the number of ticks

  • Period – std::ratio representing the tick period

Parameters:
  • graph[in] Task graph containing task specifications (must be built)

  • execution_time_duration[in] Base execution time for all tasks (any std::chrono::duration type)

Throws:

std::runtime_error – if graph has not been built

void start_workers()#

Start all worker threads and wait for them to be ready. Should be called after construction and before scheduling tasks.

void join_workers(
WorkerShutdownBehavior behavior = WorkerShutdownBehavior::FinishPendingTasks,
)#

Join all worker threads after setting the stop flag. Should be called to cleanly shut down workers.

Parameters:

behavior[in] How to handle pending tasks during shutdown

Public Static Functions

static TaskSchedulerBuilder create()#

Create a TaskSchedulerBuilder for fluent configuration

Returns:

TaskSchedulerBuilder instance for method chaining
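
An end-to-end sketch tying the scheduler to a built graph: configure, start workers, schedule, then shut down cleanly (names and worker count are illustrative):

```cpp
#include <task_graph.hpp>
#include <task_scheduler.hpp>

auto scheduler = TaskScheduler::create()
                     .workers(4)
                     .manual_start()
                     .build();

scheduler.start_workers();

TaskGraph graph{"frame"};
graph.register_task("tick")
    .function([]() -> TaskResult { return TaskResult{TaskStatus::Completed}; })
    .add();
graph.build();  // must precede schedule()

scheduler.schedule(graph);  // execution_time defaults to Time::now_ns()

scheduler.join_workers(WorkerShutdownBehavior::FinishPendingTasks);
```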

class TaskSchedulerBuilder#
#include <task_scheduler.hpp>

Builder for TaskScheduler configuration

Provides a fluent interface for configuring TaskScheduler instances with sensible defaults and method chaining for optional parameters.

Public Functions

TaskSchedulerBuilder()#

Default constructor with sensible defaults.

TaskSchedulerBuilder &workers(const WorkersConfig &config)#

Set worker configuration

Parameters:

config[in] Worker configuration

Returns:

Reference to this builder for chaining

TaskSchedulerBuilder &workers(std::size_t num_workers)#

Set worker configuration with number of workers

Parameters:

num_workers[in] Number of worker threads

Returns:

Reference to this builder for chaining

template<typename Rep, typename Period>
inline TaskSchedulerBuilder &task_readiness_tolerance(
std::chrono::duration<Rep, Period> readiness_tolerance_duration,
)#

Set task readiness tolerance for early task execution

Template Parameters:
  • Rep – Arithmetic type representing the number of ticks

  • Period – std::ratio representing the tick period

Parameters:

readiness_tolerance_duration[in] Tolerance window for readiness checking - allows task to be considered ready this amount of time before its scheduled time. This tolerance window accounts for scheduling jitter and ensures tasks can be queued for execution slightly before their exact scheduled time, preventing delays due to timing precision limitations.

Returns:

Reference to this builder for chaining

TaskSchedulerBuilder &monitor_core(std::uint32_t core_id)#

Set monitor thread core pinning

Parameters:

core_id[in] Core ID for monitor thread

Throws:

std::invalid_argument – if core_id >= hardware_concurrency

Returns:

Reference to this builder for chaining

TaskSchedulerBuilder &no_monitor_pinning()#

Disable monitor thread core pinning

Returns:

Reference to this builder for chaining

template<typename Rep, typename Period>
inline TaskSchedulerBuilder &worker_sleep(
std::chrono::duration<Rep, Period> sleep_duration,
)#

Set worker sleep duration when no tasks are available

Template Parameters:
  • Rep – Arithmetic type representing the number of ticks

  • Period – std::ratio representing the tick period

Parameters:

sleep_duration[in] Sleep duration (any std::chrono::duration type)

Returns:

Reference to this builder for chaining

template<typename Rep, typename Period>
inline TaskSchedulerBuilder &worker_blackout_warn_threshold(
std::chrono::duration<Rep, Period> threshold_duration,
)#

Set worker blackout warning threshold

Configures the maximum allowed gap between worker thread polling events before logging a blackout warning. A blackout occurs when a worker thread is blocked or delayed for longer than this threshold, indicating potential performance issues or system contention.

Template Parameters:
  • Rep – Arithmetic type representing the number of ticks

  • Period – std::ratio representing the tick period

Parameters:

threshold_duration[in] Maximum gap before warning (any std::chrono::duration type)

Returns:

Reference to this builder for chaining

TaskSchedulerBuilder &auto_start()#

Enable automatic worker startup during construction

Returns:

Reference to this builder for chaining

TaskSchedulerBuilder &manual_start()#

Require manual worker startup (call start_workers() explicitly)

Returns:

Reference to this builder for chaining

TaskSchedulerBuilder &max_tasks_per_category(
std::uint32_t tasks_per_category,
)#

Set maximum tasks per category for queue preallocation

Parameters:

tasks_per_category[in] Number of tasks to preallocate per category queue

Returns:

Reference to this builder for chaining

TaskSchedulerBuilder &max_execution_records(std::size_t max_records)#

Set maximum execution records for TaskMonitor (if omitted, the limit is auto-calculated up to 50 GB)

Parameters:

max_records[in] Maximum records to keep

Returns:

Reference to builder for chaining

TaskScheduler build()#

Build the TaskScheduler with configured parameters

Throws:

std::invalid_argument – if configuration is invalid

Returns:

Constructed TaskScheduler instance

Public Static Attributes

static constexpr Nanos DEFAULT_TASK_READINESS_TOLERANCE_NS = {300'000}#

Default task readiness tolerance (300 microseconds)

static constexpr Nanos DEFAULT_WORKER_SLEEP_NS = {10'000}#

Default worker sleep duration (10 microseconds)

static constexpr std::chrono::microseconds DEFAULT_WORKER_BLACKOUT_WARN_THRESHOLD = {250}#

Default worker blackout warning threshold (250 microseconds)
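Putting the builder methods above together, a typical configuration might look like the following sketch. Only methods documented in this section are used; default construction of TaskSchedulerBuilder is an assumption here.

```cpp
#include <chrono>

using namespace std::chrono_literals;

// Sketch only: assumes a default-constructible TaskSchedulerBuilder.
auto scheduler = TaskSchedulerBuilder{}
                     .monitor_core(0)                        // pin monitor thread to core 0
                     .worker_sleep(10us)                     // idle-sleep matches the 10 us default
                     .worker_blackout_warn_threshold(250us)  // warn on >250 us polling gaps
                     .max_tasks_per_category(256)            // preallocate per-category queues
                     .auto_start()                           // start workers during build()
                     .build();
```

With manual_start() instead of auto_start(), the scheduler would be built idle and start_workers() called explicitly later.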

class TaskVisualizer#
#include <task_visualizer.hpp>

Task visualization class for generating ASCII art representations of task graphs

Creates ASCII art visualization to display task dependencies and categories in a hierarchical tree structure.

Public Functions

void add_node(
const std::string &name,
const TaskCategory category,
const std::string &tooltip = "",
)#

Add a task node to the graph

Parameters:
  • name[in] Task name

  • category[in] Task category for styling

  • tooltip[in] Optional tooltip text

void add_edge(
const std::string &from,
const std::string &to,
const std::string &label = "",
)#

Add a dependency edge between tasks

Parameters:
  • from[in] Source task name (dependency)

  • to[in] Destination task name (dependent)

  • label[in] Optional edge label

inline void set_title(const std::string &title)#

Set graph title

Parameters:

title[in] Graph title text

void clear() noexcept#

Clear all nodes and edges. Resets the graph to an empty state.

inline std::size_t get_node_count() const noexcept#

Get number of nodes in the graph

Returns:

Number of task nodes

inline std::size_t get_edge_count() const noexcept#

Get number of edges in the graph

Returns:

Number of dependency edges

std::string to_string() const#

Generate a string visualization of the task graph. Creates an ASCII representation of the task graph structure.

Returns:

String representation of the graph
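A minimal usage sketch of the functions above: build a three-node dependency chain and print it. The TaskCategory construction follows the WorkerConfig documentation below and is an assumption here.

```cpp
#include <iostream>

// Sketch only: categories and node names are illustrative.
TaskVisualizer viz;
viz.set_title("Pipeline");
viz.add_node("read", TaskCategory{BuiltinTaskCategory::Default}, "reads input");
viz.add_node("process", TaskCategory{BuiltinTaskCategory::Default});
viz.add_node("write", TaskCategory{BuiltinTaskCategory::Default});
viz.add_edge("read", "process");            // "process" depends on "read"
viz.add_edge("process", "write", "results");

std::cout << viz.to_string() << '\n';       // ASCII tree of the graph
// At this point get_node_count() is 3 and get_edge_count() is 2.
```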

struct ThreadConfig#
#include <task_utils.hpp>

Thread configuration parameters

Public Members

std::optional<std::uint32_t> core_id#

CPU core ID to pin thread to (nullopt = no pinning)

std::optional<std::uint32_t> priority#

Thread priority level (1-99, nullopt = normal scheduling)

class Time#
#include <time.hpp>

High-precision timing utilities for real-time task scheduling

Provides nanosecond precision timing and sleep functionality optimized for low-latency task execution.

Public Types

using TimePoint = std::chrono::system_clock::time_point#

System clock time point for time measurements.

Public Static Functions

static Nanos now_ns()#

Get current time in nanoseconds

Returns:

Current time in nanoseconds since epoch

static TimePoint now()#

Get current time as chrono time point

Returns:

Current time point using system_clock

static void sleep_until(Nanos target_time_ns)#

Sleep until the specified target time

Uses hybrid approach: system sleep for longer waits, then busy-wait for precision in the final microseconds.

Parameters:

target_time_ns – Target time in nanoseconds since epoch

static void sleep_until(TimePoint target_time)#

Sleep until the specified target time point

Parameters:

target_time – Target time point to sleep until

static inline void cpu_pause() noexcept#

Architecture-specific CPU pause/yield instruction. Reduces power consumption and improves performance in spin loops.
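A short sketch of the Time utilities above: a precise 1 ms wait via the hybrid sleep, and cpu_pause() in a spin loop. Nanos is assumed to be std::chrono::nanoseconds, and `flag` is a hypothetical atomic set by another thread.

```cpp
#include <atomic>
#include <chrono>

// Precise wait: system sleep plus final busy-wait (see sleep_until above).
const Nanos target = Time::now_ns() + Nanos{1'000'000};  // 1 ms from now
Time::sleep_until(target);
const Nanos overshoot = Time::now_ns() - target;         // typically small

// cpu_pause() belongs inside spin loops, e.g. waiting on a flag:
std::atomic_bool flag{false};  // hypothetical; set elsewhere
while (!flag.load(std::memory_order_acquire)) {
    Time::cpu_pause();  // eases pipeline and power pressure while spinning
}
```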

class TimedTrigger#
#include <timed_trigger.hpp>

High-precision periodic trigger with comprehensive timing analysis

Provides nanosecond-precision periodic callbacks with real-time optimization. Features include jump detection, latency monitoring, and detailed percentile statistics following TaskMonitor patterns.

Public Types

using CallbackType = std::function<void()>#

Callback function type for trigger execution.

Public Functions

~TimedTrigger()#

Destructor ensures clean shutdown.

TimedTrigger(const TimedTrigger&) = delete#
TimedTrigger &operator=(const TimedTrigger&) = delete#
TimedTrigger(TimedTrigger&&) = delete#
TimedTrigger &operator=(TimedTrigger&&) = delete#
std::error_code start(Nanos start_time = Time::now_ns())#

Start the timed trigger

Note

TimedTrigger cannot be restarted after stop(). Create new instance instead.

Parameters:

start_time[in] When to start the trigger (default: now)

Returns:

Error code indicating success or failure

void stop()#

Stop the timed trigger.

bool is_running() const noexcept#

Check if trigger is currently running

Returns:

True if trigger is running, false otherwise

void wait_for_completion(
std::optional<std::reference_wrapper<std::atomic_bool>> stop_flag = std::nullopt,
)#

Wait for trigger to complete execution

Blocks until the trigger finishes (e.g., max_triggers reached) or until the optional stop flag is set. Automatically joins threads and finalizes statistics.

Note

When using stop_flag, the caller should use at least std::memory_order_release when setting it to true to ensure proper synchronization. This method uses std::memory_order_seq_cst for safety.

Parameters:

stop_flag[in] Optional reference to atomic bool for external stop signal (e.g., from signal handler). If not provided, only waits for trigger completion.

Throws:

std::logic_error – if max_triggers is not set and stop_flag is not provided
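The external stop-flag pattern described in the note above can be sketched as follows. The signal-handler wiring is illustrative; only wait_for_completion() and the memory-order guidance come from the documentation.

```cpp
#include <atomic>
#include <functional>

// Sketch only: `trigger` is an already-built TimedTrigger without
// max_triggers set, so a stop flag is required.
std::atomic_bool stop_requested{false};

// From a signal handler or another thread, per the note above:
//   stop_requested.store(true, std::memory_order_release);

trigger.start();
trigger.wait_for_completion(std::ref(stop_requested));  // returns once the flag is set
```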

bool is_pinned() const noexcept#

Check if tick thread is pinned to specific core

Returns:

True if thread is pinned, false otherwise

std::uint32_t get_core_id() const#

Get core ID for pinned tick threads

Returns:

Core ID if pinned, throws if not pinned

bool is_stats_pinned() const noexcept#

Check if stats thread is pinned to specific core

Returns:

True if stats thread is pinned, false otherwise

std::uint32_t get_stats_core_id() const#

Get core ID for pinned stats threads

Returns:

Core ID if stats thread is pinned, throws if not pinned

bool has_thread_priority() const noexcept#

Check if thread uses RT priority

Returns:

True if thread has RT priority, false otherwise

std::uint32_t get_thread_priority() const#

Get thread priority level

Returns:

Thread priority (1-99), throws if no priority set

std::chrono::nanoseconds get_interval() const noexcept#

Get trigger interval

Returns:

Trigger interval in nanoseconds

std::optional<std::size_t> max_triggers() const noexcept#

Get maximum trigger count

Returns:

Maximum triggers if set, std::nullopt otherwise

void clear_stats()#

Clear execution statistics.

void print_summary() const#

Print comprehensive execution statistics.

std::error_code write_stats_to_file(
const std::string &filename,
TraceWriteMode mode = TraceWriteMode::Overwrite,
) const#

Write execution statistics to JSON file

Parameters:
  • filename[in] Output file path

  • mode[in] Write mode (Overwrite or Append)

Returns:

Error code indicating success or failure

int write_chrome_trace_to_file(
const std::string &filename,
TraceWriteMode mode = TraceWriteMode::Overwrite,
) const#

Write execution statistics to Chrome trace format file

Parameters:
  • filename[in] Output file path

  • mode[in] Write mode (Overwrite or Append)

Returns:

Error code indicating success or failure

Public Static Functions

template<TimedTriggerCallback CallbackT, typename Rep, typename Period>
static inline Builder create(
CallbackT &&callback,
std::chrono::duration<Rep, Period> interval,
)#

Create builder for periodic trigger

Parameters:
  • callback[in] Function to execute on each trigger (must satisfy TimedTriggerCallback concept)

  • interval[in] Trigger interval (any std::chrono::duration type)

Returns:

Builder instance

class Builder#
#include <timed_trigger.hpp>

Builder pattern for safe TimedTrigger construction

Public Functions

template<TimedTriggerCallback CallbackT, typename Rep, typename Period>
inline Builder(
CallbackT &&callback,
std::chrono::duration<Rep, Period> interval,
)#

Create builder with required parameters

Parameters:
  • callback[in] Function to execute on each trigger (must satisfy TimedTriggerCallback concept)

  • interval[in] Trigger interval (any std::chrono::duration type)

Builder &pin_to_core(std::uint32_t core)#

Pin trigger to specific CPU core

Parameters:

core[in] CPU core ID

Throws:

std::invalid_argument – if core >= hardware_concurrency

Returns:

Reference to builder for chaining

Builder &with_rt_priority(std::uint32_t priority) noexcept#

Set real-time thread priority

Parameters:

priority[in] RT priority (1-99, higher = more priority)

Returns:

Reference to builder for chaining

Builder &enable_statistics(bool enabled = true) noexcept#

Enable or disable statistics collection

Parameters:

enabled[in] Whether to collect statistics

Returns:

Reference to builder for chaining

template<typename Rep, typename Period>
inline Builder &with_latency_threshold(
std::chrono::duration<Rep, Period> threshold,
) noexcept#

Set custom latency warning threshold

Parameters:

threshold[in] Custom threshold (0 = auto-calculate, any std::chrono::duration type)

Returns:

Reference to builder for chaining

template<typename Rep, typename Period>
inline Builder &with_jump_threshold(
std::chrono::duration<Rep, Period> threshold,
) noexcept#

Set custom jump detection threshold

Parameters:

threshold[in] Custom threshold (0 = auto-calculate, any std::chrono::duration type)

Returns:

Reference to builder for chaining

template<typename Rep, typename Period>
inline Builder &with_callback_duration_threshold(
std::chrono::duration<Rep, Period> threshold,
) noexcept#

Set custom callback duration warning threshold

Parameters:

threshold[in] Custom threshold (0 = auto-calculate, any std::chrono::duration type)

Returns:

Reference to builder for chaining

Builder &with_skip_strategy(SkipStrategy strategy) noexcept#

Set skip strategy for handling missed trigger windows

Parameters:

strategy[in] Skip strategy (default: CatchupAll)

Returns:

Reference to builder for chaining

Builder &with_stats_core(std::uint32_t core_id)#

Set CPU core for stats thread pinning

Parameters:

core_id[in] Core ID to pin stats thread to

Throws:

std::invalid_argument – if core_id >= hardware_concurrency()

Returns:

Reference to builder for chaining

Builder &with_max_execution_records(
std::size_t max_records,
) noexcept#

Set maximum execution records (if omitted, the limit is auto-calculated up to 50 GB)

Parameters:

max_records[in] Maximum records to keep

Returns:

Reference to builder for chaining

Builder &max_triggers(std::size_t count) noexcept#

Set maximum number of triggers (auto-stop after count reached)

Parameters:

count[in] Maximum triggers to execute

Returns:

Reference to builder for chaining

TimedTrigger build()#

Build final TimedTrigger with auto-calculated thresholds

Returns:

Fully configured TimedTrigger
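An end-to-end sketch of the builder above: a 1 ms periodic trigger pinned to a core with RT priority, stopping itself after 10,000 ticks. Only builder methods documented in this section are used; the core and priority values are illustrative.

```cpp
#include <chrono>

using namespace std::chrono_literals;

// Sketch only: callback body and configuration values are illustrative.
auto trigger = TimedTrigger::create([] { /* periodic work */ }, 1ms)
                   .pin_to_core(2)          // dedicate a core to the tick thread
                   .with_rt_priority(80)    // RT priority, 1-99
                   .enable_statistics()
                   .max_triggers(10'000)    // auto-stop after 10,000 ticks
                   .build();

if (const auto ec = trigger.start(); ec) {
    // handle start failure; trigger cannot be restarted after stop()
}
trigger.wait_for_completion();  // blocks until max_triggers is reached
trigger.print_summary();
```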

struct CurrentTriggerData#
#include <timed_trigger.hpp>

Current trigger tracking for building execution records.

Public Members

Nanos scheduled_time = {}#

When trigger was scheduled to fire.

Nanos actual_start_time = {}#

When trigger actually started.

Nanos callback_start_time = {}#

When callback execution began.

std::uint64_t trigger_count = {}#

Sequential trigger number.

std::uint64_t skipped_triggers = {}#

Number of triggers skipped.

bool latency_warning = {false}#

Latency threshold exceeded flag.

bool callback_duration_warning = {false}#

Callback duration threshold exceeded flag.

bool jump_detected = {false}#

Timing jump detected flag.

struct TimingStatistics#
#include <task_utils.hpp>

Timing statistics result structure. Contains comprehensive timing statistics in microseconds.

Public Members

double min_us = {}#

Minimum value in microseconds.

double max_us = {}#

Maximum value in microseconds.

double avg_us = {}#

Average value in microseconds.

double median_us = {}#

Median value in microseconds.

double p95_us = {}#

95th percentile in microseconds

double p99_us = {}#

99th percentile in microseconds

double std_us = {}#

Standard deviation in microseconds.

std::size_t count = {}#

Number of values.

struct TriggerExecutionRecord#
#include <timed_trigger.hpp>

Comprehensive trigger execution record for detailed statistics. Stores complete execution data for performance analysis.

Public Members

Nanos scheduled_time = {}#

When trigger should have fired.

Nanos actual_time = {}#

When trigger actually fired.

Nanos callback_start_time = {}#

When callback started.

Nanos callback_end_time = {}#

When callback completed.

Nanos latency_ns = {}#

Latency between scheduled and actual fire time (jitter)

Nanos callback_duration_ns = {}#

Callback execution time.

Nanos inter_trigger_actual = {}#

Actual time since last trigger.

Nanos inter_trigger_expected = {}#

Expected time since last trigger.

std::uint64_t trigger_count = {}#

Sequential trigger number.

std::uint64_t skipped_triggers = {}#

Number of triggers skipped before this one.

bool exceeded_latency_threshold = {false}#

Latency warning flag.

bool exceeded_callback_duration_threshold = {false}#

Callback duration warning flag.

bool jump_detected = {false}#

Jump detection flag.

struct TriggerStatsEvent#
#include <timed_trigger.hpp>

Statistics event for lock-free communication. Used to pass timing data from the real-time thread to the stats thread.

Public Members

TriggerEventType type = {TriggerEventType::TriggerStart}#

Event type.

Nanos timestamp = {}#

Event timestamp.

Nanos scheduled_time = {}#

For TriggerStart events.

Nanos callback_duration = {}#

For CallbackEnd events.

Nanos latency = {}#

For LatencyWarning events.

Nanos jump_size = {}#

For JumpDetected events.

Nanos inter_trigger_time = {}#

For JumpDetected events.

std::uint64_t trigger_count = {}#

Sequential trigger number.

std::uint64_t skipped_triggers = {}#

For TriggersSkipped events.

struct TriggerThresholds#
#include <timed_trigger.hpp>

Parameter struct for TimedTrigger constructor thresholds.

Public Members

std::chrono::nanoseconds latency_warning_threshold = {}#

Threshold for triggering latency warnings

std::chrono::nanoseconds jump_detection_threshold = {}#

Threshold for detecting time jumps.

std::chrono::nanoseconds callback_duration_threshold = {}#

Threshold for callback execution duration warnings

struct WorkerConfig#
#include <task_worker.hpp>

Worker configuration for individual worker thread setup

Configures core pinning, thread priority, and task category assignment for individual worker threads in the task scheduler.

Public Functions

inline WorkerConfig()#

Default constructor - no pinning, normal scheduling, all categories.

inline bool is_pinned() const noexcept#

Check if worker is pinned to a specific core

Returns:

true if core pinning is enabled, false otherwise

inline std::optional<std::uint32_t> get_core_id() const noexcept#

Get the core ID for pinned workers

Returns:

Optional core ID (nullopt if worker is not pinned)

inline bool has_thread_priority() const noexcept#

Check if worker uses real-time priority

Returns:

true if RT priority is enabled, false otherwise

inline std::uint32_t get_thread_priority() const#

Get the thread priority level

Returns:

Thread priority (should only be called if has_thread_priority() returns true)

bool is_valid() const#

Validate worker configuration

Returns:

true if configuration is valid, false otherwise

void print(const std::size_t worker_index) const#

Print worker configuration details

Parameters:

worker_index[in] Worker index for display

Public Members

std::optional<std::uint32_t> core_id#

CPU core to pin worker to (nullopt = no pinning)

std::optional<std::uint32_t> thread_priority#

Thread priority level (1-99, higher = more priority)

std::vector<TaskCategory> categories#

Task categories this worker handles.

Public Static Functions

static WorkerConfig create_pinned_rt(
const std::uint32_t core,
const std::uint32_t priority = DEFAULT_PRIORITY,
const std::vector<TaskCategory> &worker_categories = {TaskCategory{BuiltinTaskCategory::Default}},
)#

Create worker with core pinning and thread priority

Parameters:
  • core[in] CPU core to pin to

  • priority[in] Thread priority (1-99)

  • worker_categories[in] Task categories to handle

Returns:

WorkerConfig with pinning and thread priority enabled

static WorkerConfig create_pinned(
const std::uint32_t core,
const std::vector<TaskCategory> &worker_categories = {TaskCategory{BuiltinTaskCategory::Default}},
)#

Create worker with only core pinning (normal scheduling)

Parameters:
  • core[in] CPU core to pin to

  • worker_categories[in] Task categories to handle

Returns:

WorkerConfig with only pinning enabled

static WorkerConfig create_rt_only(
const std::uint32_t priority,
const std::vector<TaskCategory> &worker_categories = {TaskCategory{BuiltinTaskCategory::Default}},
)#

Create worker with thread priority but no pinning

Parameters:
  • priority[in] Thread priority (1-99)

  • worker_categories[in] Task categories to handle

Returns:

WorkerConfig with thread priority enabled

static WorkerConfig create_for_categories(
const std::vector<TaskCategory> &worker_categories,
)#

Create worker for specific categories (normal priority, no pinning)

Parameters:

worker_categories[in] Task categories to handle

Returns:

WorkerConfig for specific categories

Public Static Attributes

static constexpr std::uint32_t DEFAULT_PRIORITY = 50#

Default thread priority level.
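The factory functions above cover the common worker styles; a brief sketch, with illustrative core and priority values:

```cpp
#include <cassert>

// Sketch only: core IDs and priorities are illustrative.
auto rt_worker   = WorkerConfig::create_pinned_rt(/*core=*/3, /*priority=*/90);
auto pinned_only = WorkerConfig::create_pinned(/*core=*/4);
auto cat_worker  = WorkerConfig::create_for_categories(
    {TaskCategory{BuiltinTaskCategory::Default}});

assert(rt_worker.is_pinned() && rt_worker.has_thread_priority());
assert(pinned_only.is_pinned() && !pinned_only.has_thread_priority());
assert(!cat_worker.is_pinned());  // normal scheduling, category-bound only
```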

struct WorkersConfig#
#include <task_worker.hpp>

Configuration for all workers in the task scheduler

Public Functions

inline explicit WorkersConfig(const std::size_t num_workers = 4)#

Default constructor creates workers with default configuration

Parameters:

num_workers[in] Number of worker threads to create

inline explicit WorkersConfig(
const std::vector<WorkerConfig> &worker_configs,
)#

Constructor with explicit worker configs

Parameters:

worker_configs[in] Vector of worker configurations

bool is_valid() const#

Validate all worker configurations

Returns:

true if all configurations are valid, false otherwise

void print() const#

Print configuration details for all workers.

inline std::size_t size() const noexcept#

Get number of workers

Returns:

Worker count

inline const WorkerConfig &operator[](const std::size_t index) const#

Get worker configuration by index (const)

Parameters:

index[in] Worker index

Throws:

std::out_of_range – if index is out of bounds

Returns:

Const reference to worker configuration

inline WorkerConfig &operator[](const std::size_t index)#

Get worker configuration by index (mutable)

Parameters:

index[in] Worker index

Throws:

std::out_of_range – if index is out of bounds

Returns:

Reference to worker configuration

Public Members

std::vector<WorkerConfig> workers#

Individual worker configurations.

Public Static Functions

static WorkersConfig create_for_categories(
const FlatMap<TaskCategory, std::size_t> &category_workers,
)#

Create configuration with workers for specific categories

Creates workers for the specified categories without core pinning or priority settings. Each worker is assigned to handle exactly one category. The iteration order over categories is non-deterministic but the total count of workers per category is guaranteed.

Parameters:

category_workers[in] Map of categories to number of workers

Returns:

WorkersConfig with unpinned workers for specified categories
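A sketch of the category-map factory above: two workers for a hypothetical "Render" category and one for Default. Constructing a TaskCategory from a string is an assumption here; FlatMap is the library's own map type.

```cpp
#include <cassert>
#include <cstddef>

// Sketch only: "Render" is a hypothetical user-defined category.
FlatMap<TaskCategory, std::size_t> category_workers;
category_workers[TaskCategory{"Render"}] = 2;
category_workers[TaskCategory{BuiltinTaskCategory::Default}] = 1;

auto config = WorkersConfig::create_for_categories(category_workers);
assert(config.size() == 3);  // per-category worker counts are guaranteed
config.print();              // no pinning, normal scheduling for all three
```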