Glossary of Common Morpheus Terms

MLflow Triton Plugin
A Docker container published on NGC that enables deploying models stored in MLflow to Triton Inference Server. It is also available as a Helm Chart.

module
A Morpheus module is a unit of work that can be used within a Morpheus stage and registered with an MRC segment module registry. Modules are useful when a unit of work is likely to be reused.
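
As a rough illustration, registering a Python module might look like the sketch below. This is a minimal sketch that assumes the register_module decorator from morpheus.utils.module_utils; the module id, namespace, and on_data logic are hypothetical.

    import mrc
    from mrc.core import operators as ops

    from morpheus.utils.module_utils import register_module


    # Hypothetical module id and namespace; registration adds the module to the
    # MRC segment module registry so it can be reused from stages and pipelines.
    @register_module("square_values", "example_namespace")
    def square_values_module(builder: mrc.Builder):

        def on_data(value):
            # Hypothetical unit of work: square each value flowing through.
            return value * value

        node = builder.make_node("square_values", ops.map(on_data))

        # Expose the node as the module's single input and output port.
        builder.register_module_input("input_0", node)
        builder.register_module_output("output_0", node)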

morpheus-ai-engine
A Helm Chart for deploying the infrastructure of Morpheus. It includes the Triton Inference Server, Kafka, and Zookeeper. Refer to https://catalog.ngc.nvidia.com/orgs/nvidia/teams/morpheus/helm-charts/morpheus-ai-engine.

morpheus-sdk-client
Another name for the Morpheus SDK CLI Helm Chart.

MRC
Morpheus Runtime Core. MRC pipelines are low-level representations of Morpheus pipelines.

NGC
NVIDIA GPU Cloud, the official registry for Morpheus and many other NVIDIA Docker containers.

node
A single processing element in an MRC pipeline. In Morpheus, MRC nodes are constructed by stages.

operators
Refers to small, reusable MRC nodes contained in the mrc.core.operators Python module that perform common tasks such as the following (a usage sketch appears after this list):

  • filter

  • flatten

  • map

  • on_completed

  • pairwise

  • to_list
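
As a hedged illustration of how these operators compose, the following sketch chains filter and map into a single MRC node inside a stage's build method; the method name, its exact signature, and the on_data callback are assumptions that vary across Morpheus releases.

    import mrc
    from mrc.core import operators as ops

    # Sketch of a stage build method; assumed to be part of a Morpheus stage class.
    def _build_single(self, builder: mrc.Builder, input_node):
        node = builder.make_node(
            self.unique_name,
            ops.filter(lambda msg: msg is not None),  # drop empty messages
            ops.map(self.on_data),                    # transform each message
        )
        builder.make_edge(input_node, node)
        return node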

pipeline
Represents all work to be performed end-to-end in Morpheus. A Morpheus pipeline consists of one or more segments, and each segment consists of one or more stages. At build time, a Morpheus pipeline is transformed into an MRC pipeline, which is then executed.
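
For orientation, a minimal linear pipeline might be assembled as in the sketch below; the input file name is a placeholder, and the stages chosen depend entirely on the workflow.

    from morpheus.config import Config
    from morpheus.pipeline import LinearPipeline
    from morpheus.stages.general.monitor_stage import MonitorStage
    from morpheus.stages.input.file_source_stage import FileSourceStage

    config = Config()

    # A single-segment pipeline: read messages from a file, then report throughput.
    pipeline = LinearPipeline(config)
    pipeline.set_source(FileSourceStage(config, filename="input.jsonlines"))
    pipeline.add_stage(MonitorStage(config, description="Messages"))

    # At build time the pipeline is transformed into an MRC pipeline and executed.
    pipeline.run()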

RxCpp
MRC is built on top of RxCpp, an open-source C++ implementation of the ReactiveX API. In general, Morpheus users encounter RxCpp only when they wish to write a stage in C++.

segment
A subgraph of a pipeline. Segments allow for both logical grouping and distribution across multiple processes and execution hosts.

stage
The fundamental building block in Morpheus, representing a unit of work. Stages may consist of a single MRC node, a small collection of nodes, or an entire MRC subgraph. A stage can encapsulate any piece of functionality and can integrate with any service or external library. Refer to Simple Python Stage.
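
A condensed sketch of a pass-through stage, loosely modeled on the Simple Python Stage guide, is shown below; base-class requirements and method signatures differ between Morpheus releases, so treat it as an outline rather than a drop-in implementation.

    import typing

    import mrc
    from mrc.core import operators as ops

    from morpheus.pipeline.single_port_stage import SinglePortStage


    class PassThruStage(SinglePortStage):
        """Forwards every message unchanged; replace on_data with real work."""

        @property
        def name(self) -> str:
            return "pass-thru"

        def accepted_types(self) -> typing.Tuple:
            # Accept messages of any type.
            return (typing.Any,)

        def supports_cpp_node(self) -> bool:
            return False

        def on_data(self, message):
            # Per-message work goes here; returning the message passes it along.
            return message

        def _build_single(self, builder: mrc.Builder, input_node):
            node = builder.make_node(self.unique_name, ops.map(self.on_data))
            builder.make_edge(input_node, node)
            return node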

Triton Inference Server
Triton Inference Server, part of the NVIDIA AI platform, streamlines and standardizes AI inference by enabling teams to deploy, run, and scale trained AI models from any framework on any GPU- or CPU-based infrastructure. Most Morpheus pipelines utilize Triton for inferencing via the TritonInferenceStage. Refer to https://developer.nvidia.com/nvidia-triton-inference-server.
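
As a hedged example, adding Triton-backed inference to a pipeline typically looks something like the sketch below; the model name and server URL are placeholders, and the model must already be deployed to that Triton instance.

    from morpheus.stages.inference.triton_inference_stage import TritonInferenceStage

    # Assumes `pipeline` and `config` were created as in the pipeline sketch above.
    pipeline.add_stage(
        TritonInferenceStage(
            config,
            model_name="sid-minibert-onnx",  # placeholder model name
            server_url="localhost:8001",     # placeholder Triton endpoint
            force_convert_inputs=True,
        )
    )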
