Image Mirroring
All required self-hosted NVCF artifacts (see self-hosted-artifact-manifest) must be available for pods in your Kubernetes cluster to pull before installation with the helmfile (nvcf-self-managed-stack) automation can succeed. This page provides examples of how to pull artifacts from NGC and push them to your registry.
Mirroring images is not the same as configuring image pull secrets. This page covers how to copy NVCF artifacts into your registry. If your registry is private, Kubernetes also needs credentials to pull those images at runtime. For instructions on configuring image pull secrets for the NVCF control plane pods, see control-plane-image-pull-secrets in the installation guide.
Recommended for ECR Users: Automated ECR Mirroring
If you are deploying to Amazon EKS with ECR, the nvcf-base Terraform module provides automated image mirroring as the recommended approach. This eliminates manual mirroring steps entirely.
To enable automated mirroring, set the following in your terraform.tfvars:
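A minimal `terraform.tfvars` fragment might look like the following; `create_sm_ecr_repos` is the flag described below, and any additional inputs should be checked against the `nvcf-base` module's variable definitions:

```hcl
# Enable automated mirroring of NVCF images and Helm charts into ECR.
create_sm_ecr_repos = true
```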
When create_sm_ecr_repos = true, Terraform will:
- Create all required ECR repositories under the `{cluster_name}/` prefix
- Mirror all NVCF control plane images and Helm charts from NGC
- Mirror LLS artifacts (streaming-proxy, gdn-streaming Helm chart)
- Use the correct architecture for your cluster (linux/amd64)
What’s included:
- Infrastructure components (NATS, Cassandra, OpenBao)
- Control plane components (API, SIS, gRPC proxy, invocation service, etc.)
- GPU workload components (NVCA operator, worker utilities)
- LLS components (always included)
- Reference architecture components (gateway routes, admin-issuer-proxy)
What’s NOT included by default:
- Simulation caching components (gxcache, ddcs, usd-content-cache) — uncomment in the copy script if needed
- Custom streaming application images (e.g., usd-composer) — mirror manually
Automated mirroring requires:
- AWS credentials configured with ECR push permissions. Verify with `aws sts get-caller-identity`. See aws-authentication for configuration options.
- `NGC_API_KEY` environment variable set with an API key from the `nvcf-onprem` organization before running `terraform apply`.
- `skopeo` installed on your machine. Skopeo copies container images directly between registries without requiring a local Docker daemon. See the skopeo installation guide for installation instructions.
For detailed Terraform configuration, see terraform-installation.
If you cannot use the Terraform automation (e.g., non-ECR registry, air-gapped environment), continue with the manual mirroring instructions below.
Prerequisites
You must have access to the NGC nvcf-onprem organization to begin.
- Navigate to https://org.ngc.nvidia.com/setup/api-keys and ensure you have selected the `nvcf-onprem` organization in the upper right.
- Create a Personal API key with the required scopes to pull entities.

- Set the NGC API key as an environment variable for use in any subsequent commands:
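For example, in a POSIX shell (the key value is a placeholder for the Personal API key you just created):

```shell
# Placeholder value; substitute your actual Personal API key.
export NGC_API_KEY="nvapi-xxxxxxxxxxxxxxxxxxxx"
```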
LLS-Specific Artifacts
If you plan to deploy Low Latency Streaming (LLS), you must mirror the following additional artifacts beyond the core NVCF control plane:
Container Images:
- `streaming-proxy` - Streaming Proxy container
Helm Charts:
- `gdn-streaming` - GDN Streaming Proxy Helm chart
Optional (for streaming workloads):
- Streaming application images (e.g., `usd-composer`)
These artifacts (aside from the streaming application sample) are automatically included when using Terraform automated mirroring (create_sm_ecr_repos = true). LLS artifacts are always mirrored regardless of whether lls_enabled is set.
See self-hosted-lls-installation for LLS deployment instructions.
Pulling Artifacts from NGC
Important: The examples below show how to pull individual artifacts. You must pull each image, chart, and resource listed in the self-hosted-artifact-manifest individually. These examples demonstrate the process for one artifact of each type - you will need to repeat these steps for every artifact required for your deployment.
Complete the following for each artifact:
- Pull each container image from NGC
- Pull each Helm chart from NGC
- Pull each resource (like `nvcf-base`, `nvcf-self-managed-stack`) from NGC
- Push each artifact to your target registry (ECR, Harbor, etc.)
See the self-hosted-artifact-manifest for the complete list of all required artifacts.
Pulling Images
Platform Architecture Mismatch
When pulling images, Docker pulls the architecture matching your local machine by default. If you’re running on an Apple Silicon Mac (arm64) but deploying to an amd64 cluster (most EKS/GKE clusters), you must specify the target platform:
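For example, when mirroring from an arm64 machine to an amd64 cluster (the image path and tag are illustrative; use the values from the artifact manifest):

```shell
# Pull the amd64 variant regardless of the local machine's architecture.
docker pull --platform linux/amd64 nvcr.io/nvcf-onprem/nvcf-openbao:2.5.1-nv-1.1.0
```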
Failing to specify the correct platform will result in exec format error when pods attempt to start. See image-mirroring-troubleshooting for more details.
1. Login using the Personal API key you generated in the previous step:
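A login might look like this, assuming the images are hosted on `nvcr.io`; NGC registries conventionally use the literal username `$oauthtoken` with the API key as the password:

```shell
# "$oauthtoken" is a literal string, not a variable; the API key is the password.
echo "$NGC_API_KEY" | docker login nvcr.io --username '$oauthtoken' --password-stdin
```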
2. Pull the image, specifying a platform that matches your target cluster:
Pulling Helm Charts
OCI-compliant Helm Charts
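OCI charts can be fetched with `helm pull` after a registry login. The chart path and version below are illustrative; take the real values from the artifact manifest:

```shell
# Authenticate Helm against the NGC OCI registry ("$oauthtoken" is literal).
echo "$NGC_API_KEY" | helm registry login nvcr.io --username '$oauthtoken' --password-stdin
# Pull a chart as a local .tgz file.
helm pull oci://nvcr.io/nvcf-onprem/charts/gdn-streaming --version 1.0.0
```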
Repository-based Helm Charts (Non-OCI)
Some charts are published to traditional Helm repositories rather than OCI registries: the GPU Operator and related components (gpu-operator-validator, k8s-device-plugin), plus the Omniverse DDCS, UCC, storage-service, and discovery-service charts. These are available from public NVIDIA Helm repositories in the NGC Catalog, so you can either:
- Pull directly from the public repository at runtime (simplest approach)
- Mirror to your private registry for air-gapped environments (see below)
Converting Non-OCI Charts for ECR
To push repository-based Helm charts to Amazon ECR (which requires OCI format), you must convert them:
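One possible sequence, assuming the public NVIDIA Helm repository at https://helm.ngc.nvidia.com/nvidia and an illustrative chart version; the target ECR repository (here `nvcf-self-hosted/gpu-operator`) must already exist:

```shell
# Pull the chart from the traditional (non-OCI) Helm repository.
helm repo add nvidia https://helm.ngc.nvidia.com/nvidia
helm repo update
helm pull nvidia/gpu-operator --version v25.3.0

# Push the downloaded .tgz to ECR as an OCI artifact.
aws ecr get-login-password --region us-east-1 \
  | helm registry login --username AWS --password-stdin <aws-account-id>.dkr.ecr.us-east-1.amazonaws.com
helm push gpu-operator-v25.3.0.tgz oci://<aws-account-id>.dkr.ecr.us-east-1.amazonaws.com/nvcf-self-hosted
```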
ECR will properly track both container images and Helm charts under the same repository name and version, so you can use consistent naming for both. The repository prefix (e.g., nvcf-self-hosted) must match your global.image.repository environment configuration.
Pulling Resources from NGC
Using NGC CLI
First, ensure you have the NGC CLI installed and configured using the Personal API key you created.
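One way to configure the CLI non-interactively is through the `NGC_CLI_API_KEY` and `NGC_CLI_ORG` environment variables that the NGC CLI reads, avoiding the interactive `ngc config set` prompts:

```shell
# Reuse the API key exported earlier; point the CLI at the nvcf-onprem org.
export NGC_CLI_API_KEY="$NGC_API_KEY"
export NGC_CLI_ORG="nvcf-onprem"
# Verify the active configuration.
ngc config current
```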
Downloading nvcf-base
The nvcf-base repository contains Terraform configurations and core application deployments for self-hosted NVCF infrastructure.
Check for the latest version before downloading. The version shown below is an example only.
Download and extract:
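A download might look like this; the version is a placeholder, and the CLI typically extracts the resource into a `<name>_v<version>` directory:

```shell
# Version is illustrative; check for the latest before downloading.
ngc registry resource download-version "nvcf-onprem/nvcf-base:1.0.0"
cd nvcf-base_v1.0.0
```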
If you don’t have access to this repository, contact your NVIDIA representative.
Downloading nvcf-self-managed-stack
The nvcf-self-managed-stack repository contains Helmfile configurations for deploying the NVCF control plane components.
Check for the latest version before downloading. The version shown below is an example only.
Download and extract:
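A download might look like this; the version is a placeholder, and the CLI typically extracts the resource into a `<name>_v<version>` directory:

```shell
# Version is illustrative; check for the latest before downloading.
ngc registry resource download-version "nvcf-onprem/nvcf-self-managed-stack:1.0.0"
cd nvcf-self-managed-stack_v1.0.0
```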
If you don’t have access to this repository, contact your NVIDIA representative.
Downloading nvcf-cli
The nvcf-cli is a command-line interface for managing NVIDIA Cloud Functions in self-hosted deployments.
Check for the latest version before downloading. The version shown below is an example only.
Download and extract:
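A download might look like this; the version is a placeholder, and the CLI typically extracts the resource into a `<name>_v<version>` directory:

```shell
# Version is illustrative; check for the latest before downloading.
ngc registry resource download-version "nvcf-onprem/nvcf-cli:1.0.0"
cd nvcf-cli_v1.0.0
chmod +x nvcf-cli   # ensure the CLI binary is executable
```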
The extracted directory contains:
- `nvcf-cli` - The CLI binary
- `nvcf-cli.yaml.template` - Configuration template
- `examples/` - Sample configuration files for different environments
- `USAGE-GUIDE.md` - Detailed usage documentation
See self-hosted-cli for detailed configuration instructions.
If you don’t have access to this repository, contact your NVIDIA representative.
Pushing to Your Registry
Ensure all artifacts listed in the self-hosted-artifact-manifest are mirrored to your registry before beginning the installation process.
Example: Pushing to Amazon ECR
This example assumes you’re configured and authenticated using the AWS CLI.
Identify Your AWS Account ID
The examples below use <aws-account-id> as a placeholder. To get your AWS account ID, run:
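The account ID can be retrieved with the AWS CLI:

```shell
# Prints the 12-digit account ID of the currently authenticated identity.
aws sts get-caller-identity --query Account --output text
```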
ECR Repository Naming Convention
The Helm templates expect images at: {{ registry }}/{{ repository }}/image-name:tag
For example, with environment configuration:
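A configuration consistent with the resulting path below might be (the exact key names under `global.image` are assumed to follow the `global.image.repository` setting referenced later on this page):

```yaml
global:
  image:
    registry: <aws-account-id>.dkr.ecr.us-east-1.amazonaws.com
    repository: nvcf-self-hosted
```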
The resulting image path would be: <aws-account-id>.dkr.ecr.us-east-1.amazonaws.com/nvcf-self-hosted/nvcf-openbao:2.5.1-nv-1.1.0
In ECR, you must create repositories with the full path including the prefix, e.g., nvcf-self-hosted/bitnami-cassandra, nvcf-self-hosted/nvcf-openbao, etc.
Initial Setup
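Initial setup might look like this, assuming the `us-east-1` region and the `nvcf-self-hosted` prefix; repeat the `create-repository` call for every artifact in the manifest:

```shell
AWS_ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text)
REGION=us-east-1
REGISTRY="${AWS_ACCOUNT_ID}.dkr.ecr.${REGION}.amazonaws.com"

# Authenticate Docker against ECR.
aws ecr get-login-password --region "$REGION" \
  | docker login --username AWS --password-stdin "$REGISTRY"

# ECR repositories must exist before the first push.
aws ecr create-repository --repository-name nvcf-self-hosted/nvcf-openbao --region "$REGION"
```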
Push an Image to ECR
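A tag-and-push for a single image might look like this (the source path on `nvcr.io` is an assumption; use the path from the artifact manifest):

```shell
REGISTRY="<aws-account-id>.dkr.ecr.us-east-1.amazonaws.com"
# Re-tag the locally pulled image into the ECR naming convention, then push.
docker tag nvcr.io/nvcf-onprem/nvcf-openbao:2.5.1-nv-1.1.0 \
  "${REGISTRY}/nvcf-self-hosted/nvcf-openbao:2.5.1-nv-1.1.0"
docker push "${REGISTRY}/nvcf-self-hosted/nvcf-openbao:2.5.1-nv-1.1.0"
```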
Push a Helm Chart to ECR
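A chart push might look like this; the chart filename is illustrative, and the matching ECR repository (here `nvcf-self-hosted/nvcf-openbao`) must already exist:

```shell
REGISTRY="<aws-account-id>.dkr.ecr.us-east-1.amazonaws.com"
# Helm has its own registry login, separate from Docker's.
aws ecr get-login-password --region us-east-1 \
  | helm registry login --username AWS --password-stdin "$REGISTRY"
helm push nvcf-openbao-1.0.0.tgz "oci://${REGISTRY}/nvcf-self-hosted"
```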
Replace <aws-account-id> with your AWS account ID (run aws sts get-caller-identity --query Account --output text). The REPO_PREFIX value must match your global.image.repository setting in your environment config. Adjust the region as needed.
Example: Pushing to Volcano Engine Container Registry
This example shows how to push images and Helm charts to Volcano Engine Container Registry (CR) using the web console, Docker commands and Helm commands.
Volcano Engine CR Repository Naming Convention
The Helm templates expect images at: {{ registry }}/{{ repository }}/image-name:tag
For example, with environment configuration:
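A configuration consistent with the resulting path below might be (key names under `global.image` are assumed, mirroring the ECR example above):

```yaml
global:
  image:
    registry: cr-example-cn-beijing.cr.volces.com
    repository: nvcf-self-hosted
```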
The resulting image path would be: cr-example-cn-beijing.cr.volces.com/nvcf-self-hosted/nvcf-openbao:2.5.1-nv-1.1.0
Docker Authentication
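Using the placeholders described below, a login might look like:

```shell
# Substitute your CR endpoint, username, and password.
echo 'your-password' | docker login cr-example-cn-beijing.cr.volces.com \
  --username your-username --password-stdin
```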
Replace cr-example-cn-beijing.cr.volces.com with your Volcano Engine CR endpoint, your-username with your username, and your-password with your password.
Navigate to your Volcano Engine Container Registry instance web console to get the username and password. If you haven’t set the password, you can set it by clicking “Set Repository Instance Password”.
Push an Image to Volcano Engine CR
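A tag-and-push might look like this (the source path on `nvcr.io` is an assumption; use the path from the artifact manifest):

```shell
REGISTRY="cr-example-cn-beijing.cr.volces.com"
docker tag nvcr.io/nvcf-onprem/nvcf-openbao:2.5.1-nv-1.1.0 \
  "${REGISTRY}/nvcf-self-hosted/nvcf-openbao:2.5.1-nv-1.1.0"
docker push "${REGISTRY}/nvcf-self-hosted/nvcf-openbao:2.5.1-nv-1.1.0"
```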
Push a Helm Chart to Volcano Engine CR
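A chart push might look like this; the chart filename is illustrative:

```shell
REGISTRY="cr-example-cn-beijing.cr.volces.com"
echo 'your-password' | helm registry login "$REGISTRY" \
  --username your-username --password-stdin
helm push nvcf-openbao-1.0.0.tgz "oci://${REGISTRY}/nvcf-self-hosted"
```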
Troubleshooting
exec format error
Symptom: Pods fail to start with `Init:CrashLoopBackOff` or `CrashLoopBackOff` status, and the container logs show a message containing `exec format error`.
Cause: This error occurs when container images were pulled/pushed with an architecture that doesn’t match your cluster’s node architecture. This commonly happens when:
- Mirroring from an Apple Silicon Mac (arm64) to an amd64 EKS/GKE cluster
- Mirroring from an Intel/AMD machine (amd64) to an arm64 cluster (e.g., AWS Graviton)
Solution:
1. Delete the incorrectly mirrored images from your registry (e.g., ECR):
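For ECR, deletion might look like this (repository and tag are illustrative):

```shell
# Remove the wrong-architecture image tag from the ECR repository.
aws ecr batch-delete-image \
  --repository-name nvcf-self-hosted/nvcf-openbao \
  --image-ids imageTag=2.5.1-nv-1.1.0
```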
2. Clean the local Docker cache to ensure fresh pulls:
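For example (the image name is illustrative):

```shell
# Remove the locally cached wrong-architecture image, then prune dangling layers.
docker rmi nvcr.io/nvcf-onprem/nvcf-openbao:2.5.1-nv-1.1.0 || true
docker image prune --force
```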
3. Re-mirror the images with the correct platform: pull with an explicit `--platform` flag matching your cluster, then re-tag and push to your registry.
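A full pull, tag, and push sequence might look like this (registry and image values are illustrative):

```shell
REGISTRY="<aws-account-id>.dkr.ecr.us-east-1.amazonaws.com"
SRC="nvcr.io/nvcf-onprem/nvcf-openbao:2.5.1-nv-1.1.0"
DST="${REGISTRY}/nvcf-self-hosted/nvcf-openbao:2.5.1-nv-1.1.0"

# Pull the variant matching the cluster nodes, then mirror it.
docker pull --platform linux/amd64 "$SRC"
docker tag "$SRC" "$DST"
docker push "$DST"
```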
4. Force Kubernetes to re-pull images by either:
   - Setting `imagePullPolicy: Always` temporarily in your Helm values
   - Deleting and redeploying the affected StatefulSets/Deployments