NVIDIA Holoscan SDK v3.4.0

Class InferContext

class InferContext

Inference Context class

Public Functions

InferContext()
~InferContext()
InferStatus set_inference_params(std::shared_ptr<InferenceSpecs> &inference_specs)

Sets the inference parameters

Parameters

inference_specs – Shared pointer to the inference specifications

Returns

InferStatus with appropriate holoinfer_code and message.

InferStatus execute_inference(std::shared_ptr<InferenceSpecs> &inference_specs, cudaStream_t cuda_stream = 0)

Executes the inference. The toolkit supports one input per model, of type float32. The provided CUDA stream is used to prepare the input data and to operate on the output data; any CUDA work the caller performs on the results must be synchronized with this stream.

Parameters
  • inference_specs – Shared pointer to the inference specifications

  • cuda_stream – CUDA stream

Returns

InferStatus with appropriate holoinfer_code and message.
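Taken together, `set_inference_params` and `execute_inference` suggest the usage pattern sketched below. This is a hedged sketch, not a verified SDK example: the header path, the `holoscan::inference` namespace, the construction of `InferenceSpecs`, and the `InferStatus` accessors (`get_code`, `display_message`) and `holoinfer_code::H_SUCCESS` value are assumptions based on this reference, not shown in it.

```cpp
// Sketch of the inference flow; header path and namespace are assumptions.
#include <memory>
#include <holoinfer.hpp>  // assumed HoloInfer entry header

using namespace holoscan::inference;  // assumed namespace

void run_inference(std::shared_ptr<InferenceSpecs> inference_specs,
                   cudaStream_t stream) {
  InferContext context;

  // Register the specifications (models, I/O mappings, backend, etc.).
  InferStatus status = context.set_inference_params(inference_specs);
  if (status.get_code() != holoinfer_code::H_SUCCESS) {  // assumed accessor
    status.display_message();                            // assumed accessor
    return;
  }

  // Execute inference. Input preparation and output writes are issued on
  // `stream`, so later CUDA work on the outputs must sync with it.
  status = context.execute_inference(inference_specs, stream);
  if (status.get_code() != holoinfer_code::H_SUCCESS) {
    status.display_message();
    return;
  }

  // Only read the outputs after synchronizing with the stream.
  cudaStreamSynchronize(stream);
}
```

Passing the default stream (`0`) serializes the inference with other default-stream work; passing a dedicated stream lets the caller overlap inference with unrelated CUDA activity.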

DimType get_output_dimensions() const

Gets the output dimensions per model

Returns

Map from model name to the output dimensions of the inferred data

DimType get_input_dimensions() const

Gets the input dimensions per model

Returns

Map from model name to the input dimensions
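The two dimension getters can be used to size buffers before execution. A minimal sketch, assuming `DimType` is an iterable map keyed by model name (the element type of the mapped dimensions is not specified in this reference) and that `context` has already been configured via `set_inference_params`:

```cpp
// Sketch: querying per-model I/O dimensions after set_inference_params().
// Assumption: DimType is a map from model name to that model's dimensions.
DimType input_dims  = context.get_input_dimensions();
DimType output_dims = context.get_output_dimensions();

for (const auto& [model, dims] : output_dims) {
  // `model` is the map key; `dims` holds the output dimensions of the
  // inferred data for that model, e.g. for allocating result buffers.
}
```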

© Copyright 2022-2025, NVIDIA. Last updated on Jul 1, 2025.