Deploying Inference Graphs to Kubernetes (dynamo deploy)#

This guide explains the deployment options available for Dynamo inference graphs in Kubernetes environments.

Deployment Options#

Dynamo provides two deployment options, each serving a different use case:

  1. Dynamo Cloud Kubernetes Platform: the preferred option wherever your environment supports it

  2. Manual Deployment with Helm Charts: for users who need full control over their deployments

Dynamo Cloud Kubernetes Platform [PREFERRED]#

The Dynamo Cloud Platform (deploy/cloud/) provides a managed deployment experience. It:

  • Contains the infrastructure components required for the Dynamo cloud platform

  • Is used when deploying with the dynamo deploy CLI commands

For detailed instructions on using the Dynamo Cloud Platform, see the Dynamo Cloud Platform documentation.
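As a rough sketch of the managed path, a deployment might look like the following. The endpoint variable, graph target name, and flag syntax here are assumptions for illustration; consult `dynamo deploy --help` for the authoritative syntax.

```shell
# Point the CLI at your Dynamo Cloud endpoint (hypothetical URL and
# environment-variable name; adjust to your installation).
export DYNAMO_CLOUD=https://dynamo-cloud.example.com

# Deploy a built inference graph to the platform.
# "my_graph:Frontend" is a placeholder graph target, not a real example
# from this guide.
dynamo deploy my_graph:Frontend
```

The platform then provisions the Kubernetes resources for the graph on your behalf, which is the "managed experience" described above.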

Manual Deployment with Helm Charts#

Users who need more control over their deployments can use the manual deployment path (deploy/helm/):

  • Used for manually deploying inference graphs to Kubernetes

  • Contains Helm charts and configurations for deploying individual inference pipelines

  • Provides full control over deployment parameters

  • Requires manual management of infrastructure components

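The manual path boils down to installing a chart from deploy/helm/ yourself. A minimal sketch, in which the release name, namespace, chart path, and values file are all assumptions based on the layout described above:

```shell
# Create a namespace for the pipeline (name is an assumption).
kubectl create namespace dynamo

# Install the pipeline's Helm chart with your own overrides.
# "my-pipeline" and "my-values.yaml" are placeholders.
helm install my-pipeline deploy/helm \
  --namespace dynamo \
  --values my-values.yaml

# Verify what Helm created.
kubectl get pods --namespace dynamo
```

This is where the trade-off shows up: you set every deployment parameter in the values file yourself, and you are responsible for the supporting infrastructure components that the cloud platform would otherwise manage.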

Getting Started with Deployment#

  1. For Dynamo Cloud Platform:

  2. For Manual Deployment:

Example Deployments#

See the Hello World example for a complete walkthrough of deploying a simple inference graph.

See the LLM example for a complete walkthrough of deploying a production-ready LLM inference pipeline to Kubernetes.