Deploying Inference Graphs to Kubernetes (dynamo deploy)#
This guide explains the deployment options available for Dynamo inference graphs in Kubernetes environments.
Deployment Options#
Dynamo provides two distinct deployment options that each serve different use cases:
Dynamo Cloud Kubernetes Platform is preferred wherever your environment supports it
Manual Deployment with Helm Charts is suited to users who need more control over their deployments
Dynamo Cloud Kubernetes Platform [PREFERRED]#
The Dynamo Cloud Platform (deploy/cloud/) provides a managed deployment experience:
Contains the infrastructure components required for the Dynamo cloud platform
Used when deploying with the dynamo deploy CLI commands
Provides a managed deployment experience
For detailed instructions on using the Dynamo Cloud Platform, see:
Dynamo Cloud Platform Guide: walks through installing and configuring the Dynamo cloud components on your Kubernetes cluster.
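As a rough illustration of what that installation looks like, the sketch below installs the platform components from deploy/cloud/ with Helm and checks that they come up. The chart location, release name, and namespace are assumptions for illustration; the Dynamo Cloud Platform Guide is the authoritative reference for the actual steps and configuration.

```bash
# Hypothetical sketch of installing the Dynamo Cloud platform components.
# The chart location, release name, and namespace are assumptions; follow
# the Dynamo Cloud Platform Guide for the authoritative steps.

NAMESPACE=dynamo-cloud   # assumed namespace for the platform components

# Install the platform infrastructure shipped under deploy/cloud/
# (assumed to be packaged as a Helm chart at this path)
helm upgrade --install dynamo-platform ./deploy/cloud \
  --namespace "$NAMESPACE" \
  --create-namespace

# Verify the platform pods are running before using the dynamo deploy CLI
kubectl get pods --namespace "$NAMESPACE"
```

Once these components are healthy, the dynamo deploy CLI commands described above target this platform.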
Manual Deployment with Helm Charts#
Users who need more control over their deployments can use the manual deployment path (deploy/helm/):
Used for manually deploying inference graphs to Kubernetes
Contains Helm charts and configurations for deploying individual inference pipelines
Provides full control over deployment parameters
Requires manual management of infrastructure components
Documentation:
Using the Deployment Script: all-in-one script for manual deployment
Helm Deployment Guide: detailed instructions for manual deployment
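To make the manual path concrete, here is a minimal sketch of installing a single inference pipeline chart from deploy/helm/ with Helm. The chart path, release name, namespace, and values file are assumptions for illustration; the deployment script and Helm Deployment Guide linked above are the authoritative references.

```bash
# Hypothetical sketch of a manual Helm deployment of one inference pipeline.
# The chart location, release name, namespace, and values file are assumptions.

NAMESPACE=dynamo-demo                  # assumed target namespace
VALUES_FILE=my-pipeline-values.yaml    # assumed values file with your overrides

# Install (or upgrade) the pipeline chart from the repository's deploy/helm/ directory
helm upgrade --install my-pipeline ./deploy/helm \
  --namespace "$NAMESPACE" \
  --create-namespace \
  -f "$VALUES_FILE"

# Inspect the release and pod status
helm status my-pipeline --namespace "$NAMESPACE"
kubectl get pods --namespace "$NAMESPACE"
```

With this path you own the values files and infrastructure components yourself, which is the trade-off for the additional control.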
Getting Started with Helm Deployment#
For Dynamo Cloud Platform:
Follow the Dynamo Cloud Platform Guide
Deploy a Hello World pipeline using the Operator Deployment Guide
Deploy a Dynamo LLM pipeline to Kubernetes using the Deploy LLM Guide
For Manual Deployment:
Follow the Manual Helm Deployment Guide
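Once a pipeline is deployed by either path, a quick way to smoke-test it is to port-forward its frontend service and send a request, as in the sketch below. The service name, port, and request payload are assumptions for illustration; each example's guide describes the actual endpoint it exposes.

```bash
# Hypothetical smoke test for a deployed pipeline.
# The service name, port, and request payload are assumptions; consult the
# example-specific guide for the real endpoint exposed by your deployment.

NAMESPACE=dynamo-demo
SERVICE=my-pipeline-frontend    # assumed name of the pipeline's frontend Service

# Forward the frontend service to localhost
kubectl port-forward "svc/$SERVICE" 8000:8000 --namespace "$NAMESPACE" &

# Send a test request to the forwarded port
curl -s http://localhost:8000/ -X POST \
  -H "Content-Type: application/json" \
  -d '{"text": "hello"}'
```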
Example Deployments#
See the Hello World example for a complete walkthrough of deploying a simple inference graph.
See the LLM example for a complete walkthrough of deploying a production-ready LLM inference pipeline to Kubernetes.