Struct MultiInferenceMessageInterfaceProxy

struct MultiInferenceMessageInterfaceProxy

Interface proxy, used to insulate Python bindings.

Public Static Functions

static std::shared_ptr<MultiInferenceMessage> init(std::shared_ptr<MessageMeta> meta, cudf::size_type mess_offset, cudf::size_type mess_count, std::shared_ptr<InferenceMemory> memory, cudf::size_type offset, cudf::size_type count)

Create and initialize a MultiInferenceMessage object, and return a shared pointer to the result.

Parameters
  • meta – Holds a data table, in practice a cudf DataFrame, with the ability to return both Python and C++ representations of the table

  • mess_offset – Offset into the metadata batch

  • mess_count – Number of messages in the batch

  • memory – Holds the generic tensor data in cupy arrays that will be used for inference stages

  • offset – Offset into the inference memory instance

  • count – Number of messages in the inference memory instance

Returns

std::shared_ptr<MultiInferenceMessage>
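
The snippet below is a minimal sketch of constructing a message through this proxy. It assumes the meta and memory objects already exist (building them from a cudf table and CuPy tensors is outside its scope); the helper name make_inference_message, the num_rows parameter, and the include paths are assumptions, while the init signature itself is taken from this page.

#include <memory>

#include <cudf/types.hpp>

#include "morpheus/messages/memory/inference_memory.hpp"
#include "morpheus/messages/meta.hpp"
#include "morpheus/messages/multi_inference.hpp"

// Build a MultiInferenceMessage covering the full range of both the metadata
// batch and the inference memory (offsets 0, counts num_rows).
std::shared_ptr<morpheus::MultiInferenceMessage> make_inference_message(
    std::shared_ptr<morpheus::MessageMeta> meta,
    std::shared_ptr<morpheus::InferenceMemory> memory,
    cudf::size_type num_rows)
{
    return morpheus::MultiInferenceMessageInterfaceProxy::init(
        meta, /*mess_offset=*/0, /*mess_count=*/num_rows,
        memory, /*offset=*/0, /*count=*/num_rows);
}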

static std::shared_ptr<morpheus::InferenceMemory> memory(MultiInferenceMessage &self)

Get a shared pointer to the inference memory object.

Parameters

self – The message instance

Returns

std::shared_ptr<morpheus::InferenceMemory>

static std::size_t offset(MultiInferenceMessage &self)

Get the message offset.

Parameters

self – The message instance

Returns

std::size_t

static std::size_t count(MultiInferenceMessage &self)

Get the message count.

Parameters

self – The message instance

Returns

std::size_t
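
As an illustration of the three accessors above, the hypothetical helper below reads the memory, offset, and count of an existing message and prints the window of the inference memory it refers to; only the accessor signatures come from this page, everything else is an assumption.

#include <cstddef>
#include <iostream>

#include "morpheus/messages/multi_inference.hpp"

// Print the slice of the inference memory that this message refers to.
void log_inference_window(morpheus::MultiInferenceMessage &msg)
{
    auto memory        = morpheus::MultiInferenceMessageInterfaceProxy::memory(msg);
    std::size_t offset = morpheus::MultiInferenceMessageInterfaceProxy::offset(msg);
    std::size_t count  = morpheus::MultiInferenceMessageInterfaceProxy::count(msg);

    std::cout << "inference memory rows [" << offset << ", " << offset + count
              << "), memory=" << memory.get() << std::endl;
}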

static pybind11::object get_input(MultiInferenceMessage &self, const std::string &name)

Get the input tensor identified by name as a Python object; throws a std::runtime_error if it does not exist.

Parameters
  • self – The message instance

  • name – Name of the input tensor to retrieve

Returns

pybind11::object
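
A small sketch of calling get_input defensively: it returns pybind11::none() when the requested tensor is missing, relying on the std::runtime_error documented above. The helper name try_get_input is hypothetical, and any particular tensor name passed to it is only an example.

#include <stdexcept>
#include <string>

#include <pybind11/pybind11.h>

#include "morpheus/messages/multi_inference.hpp"

// Return the named input tensor, or Python None if the message does not have it.
pybind11::object try_get_input(morpheus::MultiInferenceMessage &msg, const std::string &name)
{
    try
    {
        return morpheus::MultiInferenceMessageInterfaceProxy::get_input(msg, name);
    } catch (const std::runtime_error &)
    {
        return pybind11::none();
    }
}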

static std::shared_ptr<MultiInferenceMessage> get_slice(MultiInferenceMessage &self, std::size_t start, std::size_t stop)

Get a shared pointer to a slice of the batch based on the supplied offsets. Automatically calculates the correct mess_offset and mess_count.

Parameters
  • self – The message instance

  • start – Start offset of the slice

  • stop – Stop offset of the slice

Returns

std::shared_ptr<MultiInferenceMessage>
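
The sketch below splits a message into two consecutive slices with get_slice, letting the proxy recompute mess_offset and mess_count for each half. It assumes start and stop are interpreted relative to this message (consistent with the count accessor above); the helper name split_in_half is illustrative.

#include <cstddef>
#include <memory>
#include <utility>

#include "morpheus/messages/multi_inference.hpp"

// Split a message into two back-to-back slices: [0, mid) and [mid, count).
std::pair<std::shared_ptr<morpheus::MultiInferenceMessage>,
          std::shared_ptr<morpheus::MultiInferenceMessage>>
split_in_half(morpheus::MultiInferenceMessage &msg)
{
    std::size_t count = morpheus::MultiInferenceMessageInterfaceProxy::count(msg);
    std::size_t mid   = count / 2;

    auto first  = morpheus::MultiInferenceMessageInterfaceProxy::get_slice(msg, 0, mid);
    auto second = morpheus::MultiInferenceMessageInterfaceProxy::get_slice(msg, mid, count);
    return {first, second};
}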
