NVIDIA Holoscan SDK v2.7.0

Class InferBase

Derived Types

class InferBase

Base Inference Class.

Subclassed by holoscan::inference::OnnxInfer, holoscan::inference::TorchInfer, holoscan::inference::TrtInfer

Public Functions

virtual ~InferBase() = default

Default destructor.

inline virtual InferStatus do_inference(const std::vector<std::shared_ptr<DataBuffer>> &input_data, std::vector<std::shared_ptr<DataBuffer>> &output_buffer, cudaEvent_t cuda_event_data, cudaEvent_t *cuda_event_inference)

Does the core inference. The provided CUDA data event (cuda_event_data) is recorded after the input data is prepared; any CUDA work the backend executes must be synchronized with this event. If the inference uses CUDA, the backend should record a CUDA event of its own and return it via cuda_event_inference.

Parameters
  • input_data – Input DataBuffer

  • output_buffer – Output DataBuffer, is populated with inferred results

  • cuda_event_data – CUDA event recorded after data transfer

  • cuda_event_inference – CUDA event recorded after inference

Returns

InferStatus
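The do_inference contract above can be sketched with a minimal stand-in implementation. The types below (InferStatus, DataBuffer, the void* stand-in for cudaEvent_t) and the IdentityInfer backend are simplified assumptions for illustration, not the real Holoscan definitions; a real CUDA backend would wait on cuda_event_data before launching work and record its own event into *cuda_event_inference.

```cpp
#include <cassert>
#include <memory>
#include <vector>

// Stand-ins for the Holoscan Inference types (assumptions for illustration).
enum class InferStatusCode { kSuccess, kFailure };
struct InferStatus {
  InferStatusCode code = InferStatusCode::kSuccess;
};
struct DataBuffer {
  std::vector<float> host_data;  // simplified host-only storage
};
using cudaEvent_t = void*;  // stand-in; the real type comes from the CUDA runtime

// Simplified base mirroring the InferBase interface shown above.
class InferBase {
 public:
  virtual ~InferBase() = default;
  virtual InferStatus do_inference(
      const std::vector<std::shared_ptr<DataBuffer>>& input_data,
      std::vector<std::shared_ptr<DataBuffer>>& output_buffer,
      cudaEvent_t cuda_event_data, cudaEvent_t* cuda_event_inference) = 0;
};

// Hypothetical CPU backend: "inference" just doubles each input value.
class DoublingInfer : public InferBase {
 public:
  InferStatus do_inference(
      const std::vector<std::shared_ptr<DataBuffer>>& input_data,
      std::vector<std::shared_ptr<DataBuffer>>& output_buffer,
      cudaEvent_t /*cuda_event_data*/,
      cudaEvent_t* cuda_event_inference) override {
    output_buffer.clear();
    for (const auto& in : input_data) {
      auto out = std::make_shared<DataBuffer>();
      out->host_data = in->host_data;
      for (auto& v : out->host_data) v *= 2.0f;
      output_buffer.push_back(out);
    }
    // CPU-only path: no CUDA work was issued, so no inference event is recorded.
    if (cuda_event_inference != nullptr) *cuda_event_inference = nullptr;
    return {};
  }
};
```

A caller populates the input buffers, passes an output vector to be filled, and checks the returned InferStatus.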

inline virtual std::vector<std::vector<int64_t>> get_input_dims() const

Get input data dimensions to the model.

Returns

Vector of input dimensions. Each dimension is a vector of int64_t corresponding to the shape of the input tensor.

inline virtual std::vector<std::vector<int64_t>> get_output_dims() const

Get output data dimensions from the model.

Returns

Vector of output dimensions. Each dimension is a vector of int64_t corresponding to the shape of the output tensor.

inline virtual std::vector<holoinfer_datatype> get_input_datatype() const

Get input data types from the model.

Returns

Vector of data types, one per input tensor.

inline virtual std::vector<holoinfer_datatype> get_output_datatype() const

Get output data types from the model.

Returns

Vector of data types, one per output tensor.
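The shape and datatype getters can be exercised through the base interface as follows. The holoinfer_datatype enumerators, the simplified base class, and the DummyInfer backend with a single [1, 3, 224, 224] float input are assumptions for illustration; the real definitions live in the Holoscan Inference headers.

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Stand-in for holoscan::inference::holoinfer_datatype (assumption for illustration).
enum class holoinfer_datatype { h_Float32, h_Int8 };

// Simplified base mirroring the query interface shown above.
class InferBase {
 public:
  virtual ~InferBase() = default;
  virtual std::vector<std::vector<int64_t>> get_input_dims() const { return {}; }
  virtual std::vector<holoinfer_datatype> get_input_datatype() const { return {}; }
};

// Hypothetical backend wrapping a model with one float input of shape [1, 3, 224, 224].
class DummyInfer : public InferBase {
 public:
  std::vector<std::vector<int64_t>> get_input_dims() const override {
    return {{1, 3, 224, 224}};
  }
  std::vector<holoinfer_datatype> get_input_datatype() const override {
    return {holoinfer_datatype::h_Float32};
  }
};

// Helper: element count of one tensor, i.e. the product of its dimensions.
inline int64_t element_count(const std::vector<int64_t>& dims) {
  int64_t n = 1;
  for (int64_t d : dims) n *= d;
  return n;
}
```

Callers typically use these getters to size host/device buffers before invoking do_inference, multiplying each shape's dimensions to get the element count per tensor.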

inline virtual void cleanup()

Cleans up resources held by the inference backend.

© Copyright 2022-2024, NVIDIA. Last updated on Dec 2, 2024.