NVIDIA Morpheus (24.10.01)

morpheus.stages.inference.inference_stage.InferenceWorker

class InferenceWorker(inf_queue)[source]

Bases: object

Base class for providing implementation details for an inference stage. Create an inference worker by subclassing this class and implementing the required abstract methods. The inference stage can then be assigned this worker by implementing _get_inference_worker to return your subclass; a minimal sketch follows the method summary below.

Parameters
inf_queue : morpheus.utils.producer_consumer_queue.ProducerConsumerQueue

Inference queue.

Methods

build_output_message(msg) Create initial inference response message with result values initialized to zero.
calc_output_dims(msg) Calculates the dimensions of the inference output message data given an input message.
init() Override this function to initialize any resources needed for inference.
process(batch, callback) Main inference processing function.
stop() Override this function to stop the inference workers or perform any additional cleanup.
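
The following is a minimal sketch of how a custom worker can be wired into a stage. The MyInferenceWorker and MyInferenceStage names are hypothetical, and the exact _get_inference_worker signature should be verified against the InferenceStage source.

    from morpheus.stages.inference.inference_stage import InferenceStage, InferenceWorker
    from morpheus.utils.producer_consumer_queue import ProducerConsumerQueue

    class MyInferenceWorker(InferenceWorker):
        # Hypothetical worker wrapping a user-provided model.

        def __init__(self, inf_queue: ProducerConsumerQueue):
            super().__init__(inf_queue)
            self._model = None

        def init(self):
            # Called once per worker: load the model / allocate resources here.
            ...

        def calc_output_dims(self, msg):
            ...  # see the example under calc_output_dims below

        def process(self, batch, callback):
            ...  # see the example under process below

    class MyInferenceStage(InferenceStage):
        # Hypothetical stage that hands the custom worker to the pipeline.

        def _get_inference_worker(self, inf_queue: ProducerConsumerQueue) -> InferenceWorker:
            return MyInferenceWorker(inf_queue)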
build_output_message(msg)[source]

Create initial inference response message with result values initialized to zero. Results will be set in the message as each inference mini-batch is processed.

Parameters
msg : morpheus.messages.ControlMessage

Batch of ControlMessage.

Returns
morpheus.messages.ControlMessage

Response message with probabilities calculated from inference results.

calc_output_dims(msg)[source]

Calculates the dimensions of the inference output message data given an input message.

Parameters
msg : morpheus.messages.ControlMessage

Pipeline inference input batch before splitting into smaller inference batches.

Returns
tuple

Output dimensions of response.
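
As an illustrative sketch (not taken from the library; the tensors() count accessor and the _num_classes attribute are assumptions), a worker that emits one probability column per class might derive the shape from the input row count:

    def calc_output_dims(self, msg):
        # One row per input in the full batch, one column per model output class.
        # msg.tensors().count and self._num_classes are assumptions for illustration.
        return (msg.tensors().count, self._num_classes)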

init()[source]

Override this function to initialize any resources needed for inference. Each inference worker calls this function once.

process(batch, callback)[source]

Main inference processing function. This function will be called once for each mini-batch. Once inference is complete, the callback parameter should be used to set the response value. The callback can be invoked asynchronously.

Parameters
batch : morpheus.messages.ControlMessage

Mini-batch of inference messages.

callback : typing.Callable[[morpheus.pipeline.messages.TensorMemory], None]

Callback to set the values for the inference response.
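
A hedged sketch of a synchronous implementation follows (the tensor names, the cupy conversion, and the _run_model helper are assumptions for illustration, not part of the library API):

    import cupy as cp

    from morpheus.messages import TensorMemory

    def process(self, batch, callback):
        # Run inference on this mini-batch (hypothetical helper around the loaded model).
        input_ids = batch.tensors().get_tensor("input_ids")
        probs = self._run_model(input_ids)

        # Report the results back to the stage; the callback accepts a TensorMemory.
        callback(TensorMemory(count=probs.shape[0], tensors={"probs": cp.asarray(probs)}))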

stop()[source]

Override this function to stop the inference workers or perform any additional cleanup.
