morpheus.stages.inference.triton_inference_stage.TritonInferenceFIL
- class TritonInferenceFIL(inf_queue, c, model_name, server_url, force_convert_inputs=False, use_shared_memory=False, inout_mapping=None)[source]
Bases:
morpheus.stages.inference.triton_inference_stage._TritonInferenceWorker
This class extends _TritonInferenceWorker to handle FIL-specific model inference requests, such as building the response message. A construction sketch follows the parameter list below.
- Parameters
- inf_queue : morpheus.utils.producer_consumer_queue.ProducerConsumerQueue
  Inference queue.
- c : morpheus.config.Config
  Pipeline configuration instance.
- model_name : str
  Name of the model, specifying which model can handle the inference requests that are sent to the Triton inference server.
- server_url : str
  Triton server gRPC URL including the port.
- force_convert_inputs : bool, default = False
  Whether or not to convert the inputs to the type specified by Triton. This will happen automatically if no data would be lost in the conversion (e.g., float -> double). Set this to True to convert the input even if data would be lost (e.g., double -> float).
- use_shared_memory : bool, default = False
  Whether or not to use CUDA Shared IPC Memory for transferring data to Triton. Using CUDA IPC reduces network transfer time but requires that Morpheus and Triton are located on the same machine.
- inout_mapping : typing.Dict[str, str]
  Dictionary used to map pipeline input/output names to Triton input/output names. Use this if the Morpheus names do not match the model.
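Workers of this type are normally created internally by the Triton inference stage, but the constructor parameters above map directly onto a manual construction. The sketch below is illustrative only: the model name and server URL are hypothetical, and it assumes ProducerConsumerQueue can be constructed with its defaults.

from morpheus.config import Config
from morpheus.stages.inference.triton_inference_stage import TritonInferenceFIL
from morpheus.utils.producer_consumer_queue import ProducerConsumerQueue

config = Config()                    # pipeline configuration instance
inf_queue = ProducerConsumerQueue()  # assumes default construction is sufficient

# Hypothetical model name and server URL, for illustration only.
worker = TritonInferenceFIL(inf_queue,
                            config,
                            model_name="example-fil-model",
                            server_url="localhost:8001",
                            force_convert_inputs=True,
                            use_shared_memory=False,
                            inout_mapping=None)  # None falls back to default_inout_mapping()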
Methods
- build_output_message(x)
  Create initial inference response message with result values initialized to zero.
- calc_output_dims(x)
  Calculates the dimensions of the inference output message data given an input message.
- default_inout_mapping()
  Returns the default dictionary used to map FIL pipeline input/output names to Triton input/output names.
- init()
  Instantiates the Triton client and allocates memory for the inference inputs and outputs.
- process(batch, cb)
  Sends a batch of events as requests to the Triton inference server using the Triton client API.
- stop()
  Override this function to stop the inference workers or carry out any additional cleanups.
- needs_logits
- supports_cpp_node
- build_output_message(x)[source]
Create initial inference response message with result values initialized to zero. Results will be set in the message as each inference mini-batch is processed.
- Parameters
  - x : morpheus.pipeline.messages.MultiInferenceMessage
    Batch of inference messages.
- Returns
  - morpheus.pipeline.messages.MultiResponseMessage
    Response message with probabilities calculated from inference results.
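For context, a sketch of how the stage typically uses this method together with init() and process(); the variable names are hypothetical and the mini-batch splitting is elided.

# Sketch only: allocate a zero-initialized response up front, then let each
# processed mini-batch fill in its slice of the results.
worker.init()                                                 # create Triton client / allocate memory
output_message = worker.build_output_message(input_message)   # zero-filled MultiResponseMessage
# ... the stage then splits input_message into mini-batches and calls
# worker.process(batch, cb) for each one, with cb writing results into output_message ...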
- calc_output_dims(x)[source]
Calculates the dimensions of the inference output message data given an input message.
- Parameters
  - x : morpheus.pipeline.messages.MultiInferenceMessage
    Pipeline inference input batch before splitting into smaller inference batches.
- Returns
  - typing.Tuple
    Output dimensions of the response.
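A hedged illustration of the returned tuple: the exact values depend on the model, but it generally describes the shape of the response tensor as (number of input rows, number of model output columns).

dims = worker.calc_output_dims(input_message)
# e.g. (1024, 1) for a hypothetical FIL model producing a single probability column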
- classmethod default_inout_mapping()[source]
Returns the default dictionary used to map FIL pipeline input/output names to Triton input/output names.
- Returns
  - default_inout_mapping : typing.Dict[str, str]
    Dictionary with default input and output names.
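To inspect or extend the defaults, the classmethod can be called directly; the extra entry below is hypothetical and only illustrates the merge pattern.

defaults = TritonInferenceFIL.default_inout_mapping()  # contents are version-dependent
custom = {**defaults, "extra_output": "output__1"}      # "output__1" is illustrative only
# Pass `custom` as `inout_mapping` when the Morpheus names do not match the model.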
- init()[source]
Instantiates the Triton client and allocates memory for the inference inputs and outputs.
- process(batch, cb)[source]
Sends a batch of events as requests to the Triton inference server using the Triton client API.
- Parameters
  - batch : morpheus.pipeline.messages.MultiInferenceMessage
    Mini-batch of inference messages.
  - cb : typing.Callable[[morpheus.pipeline.messages.TensorMemory], None]
    Callback to set the values for the inference response.
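A minimal sketch of the callback contract, assuming the TensorMemory import path shown on this page; the callback body is a placeholder.

from morpheus.pipeline.messages import TensorMemory  # path as documented here; may differ by version

def on_batch_complete(mem: TensorMemory) -> None:
    # Hypothetical callback: invoked once the Triton request for this mini-batch
    # returns, with `mem` holding the response tensors to copy into the output message.
    pass

# worker.process(mini_batch, on_batch_complete)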
- stop()[source]
Override this function to stop the inference workers or carry out any additional cleanups.