Glossary of Common Morpheus Terms
module: A Morpheus module is a type of work unit that can be used within a Morpheus stage and can be registered with an MRC segment module registry. Modules are useful when a unit of work is likely to be reused across pipelines.
morpheus-ai-engine: A Helm Chart that deploys the infrastructure Morpheus requires: the Triton Inference Server, Kafka, and ZooKeeper. Refer to https://catalog.ngc.nvidia.com/orgs/nvidia/teams/morpheus/helm-charts/morpheus-ai-engine.
morpheus-sdk-client: A Helm Chart that deploys the Morpheus container. Refer to https://catalog.ngc.nvidia.com/orgs/nvidia/teams/morpheus/helm-charts/morpheus-sdk-client.
operators: Refers to small, reusable MRC nodes contained in the mrc.core.operators Python module, which perform common stream-processing tasks such as mapping, filtering, and flattening messages.
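As an illustration only, the kind of small, composable operations these operators provide can be sketched in plain Python with generator functions (no MRC dependency; the function names below are illustrative, not the mrc.core.operators API):

```python
# Dependency-free sketch of reactive-style stream operators. Each operator
# consumes an input stream (any iterable) and lazily emits an output stream.

def op_map(fn, source):
    """Apply fn to every item flowing through the stream."""
    for item in source:
        yield fn(item)

def op_filter(pred, source):
    """Pass through only items for which pred is true."""
    for item in source:
        if pred(item):
            yield item

def op_flatten(source):
    """Unpack each iterable item into individual items."""
    for item in source:
        yield from item

# Compose the operators into a tiny stream:
stream = op_flatten([[1, 2], [3, 4, 5]])          # 1, 2, 3, 4, 5
stream = op_filter(lambda x: x % 2 == 1, stream)  # 1, 3, 5
stream = op_map(lambda x: x * 10, stream)         # 10, 30, 50
print(list(stream))  # -> [10, 30, 50]
```

Because each operator is a small, self-contained transformation, the same building block can be reused anywhere in a pipeline, which is the motivation for keeping them in a shared module.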
segment: A subgraph of a pipeline. Segments allow for both logical grouping and distribution across multiple processes and execution hosts.
stage: The fundamental building block in Morpheus, representing a unit of work. Stages may consist of a single MRC node, a small collection of nodes, or an entire MRC subgraph. A stage can encapsulate any piece of functionality and is capable of integrating with any service or external library. Refer to Simple Python Stage.
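The stage concept can be illustrated with a dependency-free Python sketch. This is not the real Morpheus API (in Morpheus, a Python stage typically subclasses a stage base class and is added to a Pipeline object); the class and function names below are hypothetical:

```python
# Plain-Python sketch of the stage idea: each stage consumes a stream of
# messages and emits a (possibly transformed) stream of messages.

class PassThruStage:
    """A minimal stage that forwards every message unchanged."""
    def process(self, messages):
        for msg in messages:
            # A real stage could enrich, filter, or run inference here.
            yield msg

class UppercaseStage:
    """A stage encapsulating one unit of work: upper-casing text messages."""
    def process(self, messages):
        for msg in messages:
            yield msg.upper()

def run_pipeline(source, stages):
    """Chain stages so that each stage's output feeds the next stage."""
    stream = iter(source)
    for stage in stages:
        stream = stage.process(stream)
    return list(stream)

result = run_pipeline(["alert", "ok"], [PassThruStage(), UppercaseStage()])
print(result)  # -> ['ALERT', 'OK']
```

The key design point the sketch tries to capture is that a stage exposes only a stream-in/stream-out contract, so arbitrary functionality (including calls to external services) can be encapsulated behind it and stages can be freely recombined.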
Triton Inference Server: Part of the NVIDIA AI platform, Triton Inference Server streamlines and standardizes AI inference by enabling teams to deploy, run, and scale trained AI models from any framework on any GPU- or CPU-based infrastructure. Most Morpheus pipelines use Triton for inference via the TritonInferenceStage. Refer to https://developer.nvidia.com/nvidia-triton-inference-server.