# Morpheus AI Engine

The Morpheus AI Engine consists of the following components:

• Triton Inference Server [ ai-engine ] from NVIDIA for processing inference requests.

• Kafka Broker [ broker ] to consume and publish messages.

• Zookeeper [ zookeeper ] to maintain coordination between the Kafka Brokers.
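The bracketed names above read like container service names; a hypothetical docker-compose sketch of the three components might look like the following (image tags, ports, and volume paths are assumptions for illustration, not the sandbox's actual configuration):

```yaml
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:7.4.0
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181

  broker:
    image: confluentinc/cp-kafka:7.4.0
    depends_on:
      - zookeeper
    environment:
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://broker:9092

  ai-engine:
    image: nvcr.io/nvidia/tritonserver:23.06-py3
    command: tritonserver --model-repository=/models
    volumes:
      - ./models:/models
```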

NVIDIA Triton Inference Server is a powerful tool that speeds up MLOps workflows by simplifying how data scientists move their models into production. By supporting multiple frameworks (TensorFlow, TensorRT, PyTorch, MXNet, Python, ONNX, RAPIDS FIL, OpenVINO, C++, and more), Triton can be used in almost any machine learning application as an enterprise-grade abstraction layer between the application, the infrastructure, and the model. This allows data scientists to publish their work anywhere and perform high-performance inferencing.

Within your Morpheus Launchpad sandbox, Triton provides inferencing capabilities to Morpheus. Morpheus can perform inferencing locally with no additional tools, but by using Triton, the inference model can be updated independently of the Morpheus pipeline that consumes it. This provides greater operational flexibility.

The examples provided in your Morpheus sandbox utilize Triton and handle model publication and inferencing.
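To make the request flow concrete, the sketch below builds the JSON body that Triton's v2 HTTP inference endpoint (`POST /v2/models/<model>/infer`) expects. The model name, tensor name, and values are placeholders for illustration, not names from the sandbox's examples:

```python
import json


def build_infer_request(model_name, input_name, rows):
    """Return (url_path, json_body) for a Triton v2 HTTP inference request.

    `rows` is a list of equal-length rows of floats; the v2 protocol takes
    the tensor as a row-major flattened `data` list plus an explicit shape.
    """
    shape = [len(rows), len(rows[0])]
    body = {
        "inputs": [
            {
                "name": input_name,
                "shape": shape,
                "datatype": "FP32",
                # Flatten the rows into the row-major order Triton expects.
                "data": [x for row in rows for x in row],
            }
        ]
    }
    return f"/v2/models/{model_name}/infer", json.dumps(body)


# Placeholder model/tensor names; POST the body to the ai-engine container.
path, payload = build_infer_request("example-model", "INPUT0", [[0.1, 0.2], [0.3, 0.4]])
```

In practice the Morpheus examples drive this protocol through a Triton client library rather than hand-built JSON, but the wire format is the same.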

Apache Kafka has become one of the leading open-source cloud-native messaging platforms because it is easy to set up and maintain while also delivering impressive performance and scalability. Kafka is now the de facto method for adding streaming data to any application.

Within your Morpheus sandbox, Kafka is provided to stream simulated network data for scanning with Morpheus in simulated real time. We have provided examples that make use of Kafka, including code for producing data into a Kafka topic as well as consuming data out of one. There are also examples of configuring Morpheus to consume network data and then produce the inference results. See the Kafka examples for more details.
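The produce/consume round trip described above can be sketched as follows. This is a minimal illustration using the `kafka-python` client, assuming a broker reachable at `broker:9092`; the topic name and record fields are invented for the example and are not the sandbox's actual topics:

```python
import json


def encode_record(record):
    """Serialize a dict to the UTF-8 JSON bytes a Kafka producer sends."""
    return json.dumps(record).encode("utf-8")


def decode_record(raw):
    """Deserialize bytes received by a Kafka consumer back into a dict."""
    return json.loads(raw.decode("utf-8"))


def round_trip():
    # Requires a running broker and the `kafka-python` package; call this
    # only when broker:9092 is reachable from your environment.
    from kafka import KafkaConsumer, KafkaProducer

    producer = KafkaProducer(bootstrap_servers="broker:9092")
    producer.send("network-data", encode_record({"src_ip": "10.0.0.1", "bytes": 512}))
    producer.flush()

    consumer = KafkaConsumer(
        "network-data",
        bootstrap_servers="broker:9092",
        auto_offset_reset="earliest",
    )
    for msg in consumer:
        print(decode_record(msg.value))
        break
```

A Morpheus pipeline configured with Kafka source and sink stages plays both roles at once: it consumes raw records from one topic and produces inference results into another.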