NVIDIA Holoscan SDK v3.4.0

Class ProcessorContext

class ProcessorContext

Processor Context class

Public Functions

ProcessorContext()
InferStatus initialize(const MultiMappings &process_operations, const Mappings &custom_kernels, bool use_cuda_graphs, const std::string config_path)

Initialize the preprocessor context

Parameters
  • process_operations – Map of tensor name as key, mapped to list of operations to be applied in sequence on the tensor

  • custom_kernels – Map of custom kernel identifier, mapped to related value as a string

  • use_cuda_graphs – Flag to enable CUDA Graphs for processing custom CUDA kernels

  • config_path – Configuration path as a string

Returns

InferStatus with appropriate holoinfer_code and message.
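
The snippet below is a minimal, illustrative sketch of initializing a ProcessorContext. The header name <holoinfer.hpp>, the type aliases MultiMappings and Mappings, the operation and kernel identifiers, and the InferStatus accessors get_code() and display_message() are assumptions; verify them against the headers shipped with your SDK version.

  // Illustrative sketch only; header path, type aliases, and InferStatus accessors
  // are assumptions to be checked against the installed Holoscan/HoloInfer headers.
  #include <map>
  #include <string>
  #include <vector>

  #include <holoinfer.hpp>  // assumed to declare holoscan::inference::ProcessorContext

  namespace hi = holoscan::inference;

  int main() {
    hi::ProcessorContext processor;

    // Tensor name -> ordered list of operations applied to that tensor
    // (operation names below are placeholders, not real HoloInfer operations).
    hi::MultiMappings process_operations{
        {"preprocessed_tensor", {"example_scale_op", "example_custom_kernel"}}};

    // Custom kernel identifier -> related value as a string (placeholder entries).
    hi::Mappings custom_kernels{{"example_custom_kernel", "placeholder_kernel_value"}};

    const bool use_cuda_graphs = true;   // enable CUDA Graphs for custom CUDA kernels
    const std::string config_path = "";  // configuration path, left empty here

    hi::InferStatus status = processor.initialize(process_operations, custom_kernels,
                                                  use_cuda_graphs, config_path);

    // InferStatus carries a holoinfer_code and a message (accessor names assumed).
    if (status.get_code() != hi::holoinfer_code::H_SUCCESS) {
      status.display_message();
      return 1;
    }
    return 0;
  }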

InferStatus process(const MultiMappings &tensor_oper_map, const MultiMappings &in_out_tensor_map, DataMap &processed_result_map, const std::map<std::string, std::vector<int>> &dimension_map, bool process_with_cuda, cudaStream_t cuda_stream = 0)

Process the tensors with operations as initialized. The toolkit supports one tensor input and output per model.

Parameters
  • tensor_oper_map – Map of tensor name as key, mapped to list of operations to be applied in sequence on the tensor

  • in_out_tensor_map – Map of input tensor name mapped to vector of output tensor names after processing

  • processed_result_map – Map is updated with output tensor name as key mapped to processed output as a vector of float32 type

  • dimension_map – Map is updated with model name as key mapped to dimension of processed data as a vector

  • process_with_cuda – Flag defining if processing should be done with CUDA

  • cuda_stream – CUDA stream to use when processing is done with CUDA

Returns

InferStatus with appropriate holoinfer_code and message.
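
As an illustration, the fragment below (continuing the initialization sketch above) shows one way a process() call could be wired up. The map contents and shapes are placeholders, and the assumption that input data is already present in processed_result_map under the input tensor name should be confirmed against your SDK version.

  // Illustrative fragment; assumes the ProcessorContext was initialized as in the
  // sketch above, that <cuda_runtime.h> is included for cudaStream_t, and that the
  // caller has placed the input buffer in processed_result_map (assumption).
  hi::InferStatus run_processing(hi::ProcessorContext& processor,
                                 hi::DataMap& processed_result_map) {
    // Tensor name -> operations to run, and input tensor -> output tensor names.
    hi::MultiMappings tensor_oper_map{{"preprocessed_tensor", {"example_scale_op"}}};
    hi::MultiMappings in_out_tensor_map{{"preprocessed_tensor", {"scaled_tensor"}}};

    // Dimensions of the data to be processed (placeholder key and shape).
    std::map<std::string, std::vector<int>> dimension_map{
        {"preprocessed_tensor", {1, 3, 224, 224}}};

    const bool process_with_cuda = true;  // run the operations with CUDA
    cudaStream_t cuda_stream = 0;         // default stream; a dedicated stream can be passed

    // processed_result_map is updated with the output tensor name mapped to float32 data.
    return processor.process(tensor_oper_map, in_out_tensor_map, processed_result_map,
                             dimension_map, process_with_cuda, cuda_stream);
  }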

DataMap get_processed_data() const

Get output data per tensor. The toolkit supports one output per tensor, in float32 type.

Returns

Map of tensor name as key mapped to the output float32 type data as a vector

DimType get_processed_data_dims() const

Get output dimensions per model. The toolkit supports one output per model.

Returns

Map of model as key mapped to the output dimension (of processed data) as a vector
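
A short, illustrative continuation showing how the processed results could be read back after process(). Only the map keys are inspected here, since this page does not describe the in-memory layout of the DataMap and DimType values.

  // Illustrative fragment, continuing the sketches above with `processor` in scope;
  // assumes <iostream> in addition to the earlier includes.
  hi::DataMap outputs = processor.get_processed_data();
  hi::DimType output_dims = processor.get_processed_data_dims();

  for (const auto& entry : outputs) {
    // Each key is an output tensor name; the mapped value holds float32 data.
    std::cout << "Processed tensor: " << entry.first << "\n";
  }
  for (const auto& entry : output_dims) {
    // Each key is a model name mapped to the dimensions of its processed output.
    std::cout << "Dimensions reported for: " << entry.first << "\n";
  }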
