morpheus.stages.inference.triton_inference_stage


Classes

InputWrapper(client, model_name, config)
    This class is a wrapper around a CUDA shared memory object shared between this process and a Triton server instance.
ResourcePool(create_fn[, max_size])
    This class provides a bounded pool of resources.
ShmInputWrapper(client, model_name, config)
    This class is a wrapper around a CUDA shared memory object shared between this process and a Triton server instance.
TritonInOut(name, bytes, datatype, shape, ...)
    Data class for model input and output configuration.
TritonInferenceStage(c, model_name, server_url)
    Perform inference with Triton Inference Server.
TritonInferenceWorker(inf_queue, c, ...[, ...])
    Inference worker class for all Triton inference server requests.
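To illustrate the idea behind ResourcePool, here is a minimal, single-threaded sketch of a bounded pool: resources are created lazily by a create_fn, capped at max_size, and recycled when returned. The class and method names (BoundedResourcePool, borrow, return_obj) are illustrative assumptions, not the actual Morpheus API, and the real implementation would also need locking for concurrent use.

```python
import queue


class BoundedResourcePool:
    """Illustrative bounded pool (not the Morpheus class): resources are
    created lazily via create_fn, up to max_size, and reused after return."""

    def __init__(self, create_fn, max_size=10000):
        self._create_fn = create_fn
        self._max_size = max_size
        self._created = 0          # how many resources have been created so far
        self._idle = queue.Queue()  # resources available for reuse

    def borrow(self):
        # Prefer reusing an idle resource if one is available.
        try:
            return self._idle.get_nowait()
        except queue.Empty:
            pass
        # Create a new resource unless the pool is at capacity;
        # at capacity, block until another user returns one.
        if self._created < self._max_size:
            self._created += 1
            return self._create_fn()
        return self._idle.get()

    def return_obj(self, obj):
        # Put the resource back in the idle queue for the next borrower.
        self._idle.put(obj)
```

Borrowed objects are reused rather than recreated, which matters when create_fn is expensive (for example, allocating shared memory regions or opening client connections to a Triton server).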
© Copyright 2024, NVIDIA. Last updated on Jul 8, 2024.