- class TritonInferenceAE(inf_queue, c, model_name, server_url, force_convert_inputs=False, use_shared_memory=False, inout_mapping=None)[source]
Bases: morpheus.stages.inference.triton_inference_stage._TritonInferenceWorker
This class extends TritonInference to deal with inference processing specific to the AutoEncoder.
- Parameters
- inf_queue : morpheus.utils.producer_consumer_queue.ProducerConsumerQueue
Inference queue.
- c : morpheus.config.Config
Pipeline configuration instance.
- model_name : str
Name of the model that will handle the inference requests sent to the Triton inference server.
- server_url : str
Triton server gRPC URL including the port.
- force_convert_inputs : bool, default = False
Whether or not to convert the inputs to the type specified by Triton. This will happen automatically if no data would be lost in the conversion (e.g., float -> double). Set this to True to convert the input even if data would be lost (e.g., double -> float).
- use_shared_memory : bool, default = False
Whether or not to use CUDA Shared IPC Memory for transferring data to Triton. Using CUDA IPC reduces network transfer time but requires that Morpheus and Triton are located on the same machine.
- inout_mapping : typing.Dict[str, str]
Dictionary used to map pipeline input/output names to Triton input/output names. Use this if the Morpheus names do not match the model.
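A minimal construction sketch using only the arguments documented above. In a typical pipeline this worker is created internally by the Triton inference stage, so the direct instantiation, the import path for TritonInferenceAE, and the model name, URL, and mapping values below are illustrative assumptions rather than canonical usage.

```python
from morpheus.config import Config
from morpheus.utils.producer_consumer_queue import ProducerConsumerQueue
# Import path assumed from the base class module shown above.
from morpheus.stages.inference.triton_inference_stage import TritonInferenceAE

config = Config()  # pipeline configuration instance

worker = TritonInferenceAE(
    inf_queue=ProducerConsumerQueue(),
    c=config,
    model_name="autoencoder-onnx",         # assumed model name registered with Triton
    server_url="localhost:8001",           # Triton gRPC URL including the port
    force_convert_inputs=True,             # allow input conversion even if lossy
    use_shared_memory=False,               # CUDA IPC requires Morpheus and Triton on the same machine
    inout_mapping={"input__0": "INPUT0"},  # illustrative Morpheus -> Triton tensor-name mapping
)
```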
Methods
- build_output_message(x): Create an initial inference response message with result values initialized to zero.
- calc_output_dims(x): Calculates the dimensions of the inference output message data given an input message.
- init(): Instantiates the Triton client and allocates memory for inference inputs and outputs.
- process(batch, cb): Sends a batch of events as requests to the Triton inference server using the Triton client API.
- stop(): Override this function to stop the inference workers or carry out any additional cleanup.
- default_inout_mapping
- needs_logits
- supports_cpp_node
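The methods above are normally driven by the owning inference stage rather than called by hand. The sketch below only illustrates the expected call order; the worker, full_message, and mini_batches names are assumed placeholders, not part of the documented API.

```python
# Illustrative call order only; a real pipeline's Triton inference stage drives this loop.
worker.init()  # create the Triton client and allocate inference input/output memory

# Build a zero-initialized response for the full batch and compute its output dimensions.
output_message = worker.build_output_message(full_message)
output_dims = worker.calc_output_dims(full_message)

# Each mini-batch is sent to Triton; results arrive through the callback.
for mini_batch in mini_batches:
    worker.process(mini_batch, cb=lambda mem: None)  # no-op callback for illustration

worker.stop()  # stop the worker and perform any remaining cleanup
```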
- build_output_message(x)[source]
Create an initial inference response message with result values initialized to zero. Results will be set in the message as each inference mini-batch is processed.
- Parameters
- x : morpheus.pipeline.messages.MultiInferenceMessage
Batch of inference messages.
- Returns
- morpheus.pipeline.messages.MultiResponseProbsMessage
Response message with probabilities calculated from inference results.
- calc_output_dims(x)[source]
Calculates the dimensions of the inference output message data given an input message.
- Parameters
- x : morpheus.pipeline.messages.MultiInferenceMessage
Pipeline inference input batch before splitting into smaller inference batches.
- Returns
- typing.Tuple
Output dimensions of the response.
- init()[source]
Instantiates the Triton client and allocates memory for inference inputs and outputs.
- process(batch, cb)[source]
Sends a batch of events as requests to the Triton inference server using the Triton client API.
- Parameters
- batch : morpheus.pipeline.messages.MultiInferenceMessage
Mini-batch of inference messages.
- cb : typing.Callable[[morpheus.pipeline.messages.ResponseMemory], None]
Callback to set the values for the inference response.
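A hypothetical callback matching the documented signature is shown below; the function name and body are illustrative, not part of the API.

```python
def set_response(mem) -> None:
    # `mem` is a morpheus.pipeline.messages.ResponseMemory containing the results
    # for this mini-batch; a real callback would copy them into the output
    # message rather than just logging.
    print("received inference results for mini-batch")

worker.process(mini_batch, cb=set_response)  # `worker` and `mini_batch` as in the earlier sketch
```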
- stop()[source]
Override this function to stop the inference workers or carry out any additional cleanup.