Class OnnxInfer

Base Type

class OnnxInfer : public holoscan::inference::InferBase

ONNX Runtime-based inference class.

Public Functions

OnnxInfer(const std::string &model_file_path, bool cuda_flag)

Constructor.

Parameters
  • model_file_path – Path to the ONNX model file

  • cuda_flag – Flag indicating whether inference will run using CUDA

virtual InferStatus do_inference(std::shared_ptr<DataBuffer> &input_data, std::shared_ptr<DataBuffer> &output_buffer)

Performs the core inference using ONNX Runtime. Input and output buffers are supported on the host; inference is supported on both host and device.

Parameters
  • input_data – Input DataBuffer

  • output_buffer – Output DataBuffer; populated with the inference results

Returns

InferStatus

void populate_model_details()

Populate class parameters with model details and values.

void print_model_details()

Print model details.

int set_holoscan_inf_onnx_session_options()

Create session options for inference.

virtual std::vector<int64_t> get_input_dims() const

Get input data dimensions to the model.

Returns

Vector of dimension values

virtual std::vector<int64_t> get_output_dims() const

Get output data dimensions from the model.

Returns

Vector of dimension values

inline virtual void cleanup()

Clean up resources held by the inference instance.

© Copyright 2022, NVIDIA. Last updated on Apr 27, 2023.