NVIDIA Holoscan SDK v0.5.1

Class TrtInfer

Base Type

class TrtInfer : public holoscan::inference::InferBase

Class to execute TensorRT-based inference.

Public Functions

TrtInfer(const std::string &model_path, const std::string &model_name, bool enable_fp16, bool is_engine_path, bool cuda_buf_in, bool cuda_buf_out)

Constructor.
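
A minimal construction sketch is shown below. The header include, the namespace alias, and the interpretation of the boolean flags are assumptions based on the parameter names, not details documented on this page.

// Illustrative sketch only: include path and flag semantics are assumptions.
#include <memory>
#include <string>

// #include "holoinfer/trt_infer.hpp"  // hypothetical include; actual path depends on the SDK layout

namespace hi = holoscan::inference;

int main() {
  // Create a TensorRT inference context from a model file, letting the
  // backend build an engine with FP16 enabled and keeping I/O on the GPU.
  auto infer = std::make_unique<hi::TrtInfer>(
      "/workspace/models/model.onnx",  // model_path
      "model",                         // model_name
      true,                            // enable_fp16: allow FP16 during engine generation (assumed meaning)
      false,                           // is_engine_path: path is a model, not a prebuilt engine (assumed meaning)
      true,                            // cuda_buf_in: input buffers reside on the GPU (assumed meaning)
      true);                           // cuda_buf_out: output buffers reside on the GPU (assumed meaning)
  return 0;
}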

~TrtInfer()

Destructor.

virtual InferStatus do_inference(std::shared_ptr<DataBuffer> &input_data, std::shared_ptr<DataBuffer> &output_buffer)

Performs the core inference with the TensorRT backend.

Parameters
  • input_data – Input DataBuffer

  • output_buffer – Output DataBuffer, populated with the inference results

Returns

InferStatus
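
The sketch below shows the call pattern only; how the DataBuffer objects are allocated and filled is not covered on this page, so they are taken as given here.

// Sketch of the do_inference call pattern; buffer setup is assumed to happen
// elsewhere, and the include path is hypothetical as in the previous sketch.
#include <memory>

namespace hi = holoscan::inference;

hi::InferStatus run_once(hi::TrtInfer& infer,
                         std::shared_ptr<hi::DataBuffer>& input_data,
                         std::shared_ptr<hi::DataBuffer>& output_buffer) {
  // input_data must already hold the input tensor; on success, output_buffer
  // is populated with the inference results.
  return infer.do_inference(input_data, output_buffer);
}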

virtual std::vector<int64_t> get_input_dims() const

Get input data dimensions to the model.

Returns

Input dimensions as a vector of values

virtual std::vector<int64_t> get_output_dims() const

Get output data dimensions from the model.

Returns

Output dimensions as a vector of values
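
A short usage example follows, assuming a single input tensor; the element-count helper and its name are illustrative, not part of the SDK.

#include <cstdint>
#include <functional>
#include <numeric>
#include <vector>

namespace hi = holoscan::inference;

// Computes the number of elements implied by the model's input dimensions,
// e.g. {1, 3, 224, 224} -> 150528. Allocating a matching buffer is left to
// the caller.
int64_t input_element_count(const hi::TrtInfer& infer) {
  const std::vector<int64_t> dims = infer.get_input_dims();
  return std::accumulate(dims.begin(), dims.end(),
                         static_cast<int64_t>(1), std::multiplies<int64_t>());
}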

inline virtual void cleanup()

© Copyright 2022-2023, NVIDIA. Last updated on Jul 28, 2023.