Struct MultiAISpecs
Defined in File holoinfer_buffer.hpp
- struct MultiAISpecs
Struct that holds specifications related to Multi AI inference, along with the input and output data buffers.
Public Functions
- MultiAISpecs() = default
- inline MultiAISpecs(const std::string &backend, const Mappings &model_path_map, const Mappings &inference_map, bool is_engine_path, bool oncpu, bool parallel_proc, bool use_fp16, bool cuda_buffer_in, bool cuda_buffer_out)
Constructor.
- Parameters
backend – Backend framework for inference (trt or onnxrt)
model_path_map – Map with model name as key and path to the model as value
inference_map – Map with model name as key and output tensor name as value
is_engine_path – Flag indicating whether the model paths are paths to engine files
oncpu – Perform inference on CPU
parallel_proc – Perform parallel inference of multiple models
use_fp16 – Use FP16 conversion; supported only for trt
cuda_buffer_in – Input buffers on CUDA
cuda_buffer_out – Output buffers on CUDA
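A minimal construction sketch follows. It assumes the holoscan::inference namespace used by HoloInfer and that Mappings aliases std::map<std::string, std::string>; the model and tensor names are hypothetical.

```cpp
#include <holoinfer_buffer.hpp>  // declares MultiAISpecs (include path may vary)

namespace hi = holoscan::inference;

int main() {
  // Hypothetical model and output tensor names for illustration.
  hi::Mappings model_path_map = {{"model_a", "/workspace/models/model_a.onnx"}};
  hi::Mappings inference_map  = {{"model_a", "output_tensor_a"}};

  hi::MultiAISpecs specs("trt",           // backend: TensorRT
                         model_path_map,  // model name -> model file path
                         inference_map,   // model name -> output tensor name
                         false,           // is_engine_path: not engine files
                         false,           // oncpu: run inference on GPU
                         false,           // parallel_proc: sequential inference
                         true,            // use_fp16: FP16 conversion (trt only)
                         true,            // cuda_buffer_in: inputs on CUDA
                         true);           // cuda_buffer_out: outputs on CUDA
  return 0;
}
```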
- inline Mappings get_path_map() const
Get the model data path map.
- Returns
Mappings data
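A hedged usage sketch, continuing the specs object from the construction example above and assuming Mappings iterates like a std::map<std::string, std::string>:

```cpp
#include <iostream>

// Print each model name and its configured model path.
for (const auto& [model_name, model_path] : specs.get_path_map()) {
  std::cout << model_name << " -> " << model_path << "\n";
}
```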
Public Members
- std::string backend_type_ = {"trt"}
Backend framework used for inference (trt or onnxrt). Default is trt.
- Mappings model_path_map_
Map with key as model name and value as model file path.
- Mappings inference_map_
Map with key as model name and value as inferred tensor name.
- bool is_engine_path_ = false
Flag showing whether the input model paths are paths to engine files.
- bool oncuda_ = true
Flag showing if inference runs on CUDA. Default is True.
- bool parallel_processing_ = false
Flag to enable parallel inference. Default is False.
- bool use_fp16_ = false
Flag showing if trt engine file conversion will use FP16. Default is False.
- bool cuda_buffer_in_ = true
Flag showing if input buffers are on CUDA. Default is True.
- bool cuda_buffer_out_ = true
Flag showing if output buffers are on CUDA. Default is True.
- DataMap data_per_model_
Input Data Map with key as model name and value as DataBuffer.
- DataMap data_per_tensor_
Input Data Map with key as tensor name and value as DataBuffer.
- DataMap output_per_model_
Output Data Map with key as tensor name and value as DataBuffer.
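A hedged sketch of inspecting the output map after inference, continuing the specs object from the construction example above and assuming DataMap maps tensor names to shared DataBuffer pointers:

```cpp
#include <iostream>

// List which output tensors have an allocated DataBuffer.
for (const auto& [tensor_name, buffer] : specs.output_per_model_) {
  std::cout << "output tensor: " << tensor_name
            << (buffer ? " (buffer allocated)" : " (empty)") << "\n";
}
```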