NVIDIA Holoscan SDK v3.2.0

Class OnnxInfer

Base Type

class OnnxInfer : public holoscan::inference::InferBase

ONNX Runtime-based inference class.

Public Functions

OnnxInfer(const std::string &model_file_path, bool enable_fp16, int32_t dla_core, bool dla_gpu_fallback, bool cuda_flag, bool cuda_buf_in, bool cuda_buf_out)

Constructor. A construction sketch follows the parameter list.

Parameters
  • model_file_path – Path to the ONNX model file.

  • enable_fp16 – Flag indicating whether the TensorRT engine file conversion uses FP16.

  • dla_core – The DLA core index on which to execute the engine; indices start at 0. Set to -1 to disable DLA.

  • dla_gpu_fallback – If DLA is enabled, fall back to the GPU when a layer cannot be executed on DLA. If the fallback is disabled, engine creation fails when a layer cannot be executed on DLA.

  • cuda_flag – Flag indicating whether inference runs using CUDA.

  • cuda_buf_in – Flag indicating whether the input data buffer is in CUDA memory.

  • cuda_buf_out – Flag indicating whether the output data buffer is in CUDA memory.
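
A minimal construction sketch, assuming illustrative values; the model path, flag values, and header name below are assumptions, not SDK defaults:

#include <holoinfer.hpp>  // header name is an assumption

// Hypothetical example values; adjust for your deployment.
holoscan::inference::OnnxInfer infer(
    "/path/to/model.onnx",  // model_file_path
    true,                   // enable_fp16: build the TensorRT engine with FP16
    -1,                     // dla_core: -1 disables DLA
    true,                   // dla_gpu_fallback: fall back to GPU for unsupported layers
    true,                   // cuda_flag: run inference using CUDA
    true,                   // cuda_buf_in: input buffer resides in CUDA memory
    true);                  // cuda_buf_out: output buffer resides in CUDA memory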

~OnnxInfer()

Destructor.

virtual InferStatus do_inference(const std::vector<std::shared_ptr<DataBuffer>> &input_data, std::vector<std::shared_ptr<DataBuffer>> &output_buffer, cudaEvent_t cuda_event_data, cudaEvent_t *cuda_event_inference)

Does the core inference using ONNX Runtime. Input and output buffers are supported on the host; inference is supported on host and device. The provided CUDA data event is used to prepare the input data, and any execution of CUDA work must be synchronized with this event. If the inference uses CUDA, it records a CUDA event and passes it back in cuda_event_inference. A call sketch follows the parameter list.

Parameters
  • input_data – Input DataBuffer.

  • output_buffer – Output DataBuffer, populated with the inference results.

  • cuda_event_data – CUDA event used to synchronize the preparation of the input data.

  • cuda_event_inference – Pointer through which the CUDA event recorded by a CUDA-based inference is returned.

Returns

InferStatus
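
A hedged call sketch; buffer allocation and population are elided, and the exact DataBuffer setup is an assumption about the surrounding HoloInfer API:

#include <cuda_runtime.h>

// Input/output buffers prepared elsewhere (allocation and population elided).
std::vector<std::shared_ptr<holoscan::inference::DataBuffer>> input_data;
std::vector<std::shared_ptr<holoscan::inference::DataBuffer>> output_buffer;

cudaEvent_t cuda_event_data;
cudaEventCreate(&cuda_event_data);
// ... record cuda_event_data on the stream that prepared the input data ...

cudaEvent_t cuda_event_inference = nullptr;
holoscan::inference::InferStatus status =
    infer.do_inference(input_data, output_buffer, cuda_event_data, &cuda_event_inference);
// If the inference ran on CUDA, cuda_event_inference marks its completion;
// synchronize with it before reading output_buffer.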

void populate_model_details()

Populate class parameters with model details and values.

void print_model_details()

Print model details.

int set_holoscan_inf_onnx_session_options()

Create session options for inference.
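
The options applied here are internal to the SDK. Purely as an illustration of the pattern, ONNX Runtime sessions are configured through the Ort::SessionOptions API:

#include <onnxruntime_cxx_api.h>

// Illustrative only; the options this method actually sets may differ.
Ort::SessionOptions session_options;
session_options.SetIntraOpNumThreads(1);
session_options.SetGraphOptimizationLevel(GraphOptimizationLevel::ORT_ENABLE_ALL);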

virtual std::vector<std::vector<int64_t>> get_input_dims() const

Get input data dimensions to the model.

Returns

Vector of input dimensions. Each dimension is a vector of int64_t corresponding to the shape of the input tensor.

virtual std::vector<std::vector<int64_t>> get_output_dims() const

Get output data dimensions from the model.

Returns

Vector of output dimensions. Each dimension is a vector of int64_t corresponding to the shape of the output tensor.

virtual std::vector<holoinfer_datatype> get_input_datatype() const

Get input data types from the model.

Returns

Vector of data types, one per input tensor.

virtual std::vector<holoinfer_datatype> get_output_datatype() const

Get output data types from the model.

Returns

Vector of data types, one per output tensor.
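
A minimal sketch inspecting the model's inputs after construction (assumes the infer instance from the constructor sketch above):

#include <iostream>

const auto dims = infer.get_input_dims();       // one shape per input tensor
const auto types = infer.get_input_datatype();  // one datatype per input tensor
for (size_t i = 0; i < dims.size(); ++i) {
  std::cout << "input " << i << " datatype id " << static_cast<int>(types[i]) << ", shape:";
  for (int64_t d : dims[i]) std::cout << ' ' << d;
  std::cout << '\n';
}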

virtual void cleanup()
