Helmfile Installation

This section covers the installation of the NVCF control plane components, which are required for all self-hosted NVCF deployments.

By default, the NVCF self-hosted stack is deployed using the provided Helmfile as described here. However, you can also install each Helm chart individually using helm install or helm upgrade (see self-hosted-standalone-deployment).

This guide assumes you have already downloaded and extracted the nvcf-self-managed-stack helmfile bundle (see download-nvcf-self-managed-stack). All commands in this guide are run from inside the extracted nvcf-self-managed-stack/ directory unless otherwise noted. The directory contains the helmfile definitions, environment templates, and sample configurations referenced throughout.

$cd path/to/nvcf-self-managed-stack
$ls
$# Expected contents: helmfile.d/ environments/ secrets/ global.yaml.gotmpl ...

Namespace Requirements

Each Helm chart in the NVCF stack must be installed into a specific namespace. These namespace assignments are fixed and must not be changed — service-to-service cluster DNS addressing and Vault (OpenBao) authentication claims depend on this layout.

Namespace              Services
---------------------  --------------------------------------------------------------------------
nvcf                   api, invocation-service, grpc-proxy, notary-service, reval, state-metrics
api-keys               api-keys, admin-issuer-proxy
ess                    ess-api
sis                    sis
vault-system           openbao-server
cassandra-system       cassandra
nats-system            nats
envoy-gateway-system   ingress (nvcf-gateway-routes)

Installing a chart into the wrong namespace will cause authentication failures such as error validating claims: claim "/kubernetes.io/namespace" does not match any associated bound claim values. If you see this error, verify that every release is deployed in the namespace shown above.
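To audit an existing deployment against the table above, you can compare each release's actual namespace with its required one. The helper below is an illustrative sketch only: the sample calls at the bottom stand in for real data, which you would extract from `helm list --all-namespaces` as "release namespace" pairs.

```shell
# Sketch: flag releases deployed outside their required namespace.
check_ns() {
  release=$1; actual=$2
  case $release in
    cassandra)      want=cassandra-system ;;
    nats)           want=nats-system ;;
    openbao-server) want=vault-system ;;
    api-keys)       want=api-keys ;;
    ess-api)        want=ess ;;
    sis)            want=sis ;;
    *)              want=nvcf ;;   # api, invocation-service, grpc-proxy, notary-service, ...
  esac
  if [ "$actual" != "$want" ]; then
    echo "WRONG: release '$release' is in '$actual', expected '$want'"
  fi
}

check_ns cassandra default    # prints a warning: cassandra belongs in cassandra-system
check_ns nats nats-system     # prints nothing: correct namespace
```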

Prerequisites

Required Tools and Software

The following tools must be installed on your deployment machine:

  • kubectl
  • helm >= 3.12
  • helmfile >= 1.1.0 (recommended: 1.1.x)
  • helm-diff plugin >=3.11

Avoid Helmfile 1.2.x. Helmfile 1.2.0 removed sequential execution mode, which the NVCF stack requires for ordered deployments. Use version 1.1.x for compatibility with the commands in this guide.

Helmfile 1.3.0+ re-introduced sequential execution via the --sequential-helmfiles flag, but the command syntax differs from the 1.1.x examples shown here. If you choose to use 1.3.0+, add --sequential-helmfiles to every helmfile apply and helmfile sync command.

  • A Kubernetes cluster (CSP-agnostic or on-prem).
  • Kubernetes Gateway CRDs installed (optional, required for Gateway API Ingress)
  • Artifacts must be available in a registry that your Kubernetes cluster can access. This can be the nvcf-onprem registry for NVCF control plane service artifacts, but function containers and helm charts must be configured to a user-managed registry. See self-hosted-artifact-manifest and self-hosted-image-mirroring.
  • The nvcf-self-managed-stack repository must be downloaded to your local machine (see download-nvcf-self-managed-stack).

See terraform-installation for instructions on how to deploy a Kubernetes cluster on EKS or other CSPs if you don’t have one already.

Install the Kubernetes Gateway API CRDs v1.2.0. If you replace v1.2.0 with a different version, ensure compatibility with the GatewayClass and Gateway resources created in Step 1.

$# Replace with desired version
$kubectl apply -f https://github.com/kubernetes-sigs/gateway-api/releases/download/v1.2.0/experimental-install.yaml
$# Install helm-diff plugin (required for helmfile)
$helm plugin install https://github.com/databus23/helm-diff

kubectl version must match your cluster (within one minor version). Using a kubectl version that is more than one minor version ahead of your Kubernetes cluster will cause kubectl apply and kubectl patch commands to fail — not just warn — due to stricter server-side field validation in newer clients.

This is especially common on macOS with Homebrew, where brew install kubectl or brew upgrade can silently install a version much newer than your cluster. Verify before proceeding:

$kubectl version
$# Ensure the Client Version and Server Version are within one minor version of each other.
$# Example: Client v1.32.x against Server v1.31.x is OK.
$# Client v1.32.x against Server v1.29.x will cause failures.

If your client is too new, install a matching version directly from the Kubernetes release page.
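The skew check can be scripted. This is a hedged sketch: the two version strings below are hypothetical examples; in practice you would extract them from `kubectl version -o json` (the `.clientVersion.gitVersion` and `.serverVersion.gitVersion` fields).

```shell
# Sketch: compute client/server minor-version skew from version strings.
client="v1.32.2"   # example client version
server="v1.29.7"   # example server version
cminor=$(echo "$client" | cut -d. -f2)
sminor=$(echo "$server" | cut -d. -f2)
skew=$((cminor - sminor))
if [ "${skew#-}" -le 1 ]; then
  echo "OK: skew is $skew minor version(s)"
else
  echo "MISMATCH: client is $skew minor version(s) ahead of server"
fi
```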

Access Requirements

  • kubectl configured to the kubernetes cluster you are deploying to

  • Personal NGC API Key from ngc.nvidia.com (authenticated with the nvcf-onprem organization), required only if you pull artifacts directly from NGC or use NGC as your registry

  • Registry credentials for your container registry (ECR, NGC, etc.) - see third-party-registries-self-hosted for setup instructions

  • Local Helm/Docker authentication to your container registry where NVCF charts are stored. Helmfile pulls OCI charts during deployment, so your local environment must be authenticated. Examples:

    • AWS ECR: aws ecr get-login-password --region <region> | helm registry login --username AWS --password-stdin <account-id>.dkr.ecr.<region>.amazonaws.com
    • NGC: docker login nvcr.io -u '$oauthtoken' -p <NGC_API_KEY>
    • Other registries: Use docker login or helm registry login as appropriate for your registry

If you are using NGC as your registry, you will use your NGC API key when generating the base64 registry credential in Step 3. Exporting NGC_API_KEY is optional and only needed if you prefer to reuse it in commands.

Installation Steps

The installation flow is as follows.

  1. Prepare ingress configuration
  2. Configure your environment file (environments/<environment-name>.yaml)
  3. Configure your secrets file (secrets/<environment-name>-secrets.yaml)
  4. Configure image pull secrets (skip if using a CSP registry with built-in credential helpers)
  5. Deploy the NVCF control plane components
  6. Verify the installation

Step 1. Prepare ingress configuration

  1. First, create the required namespaces for NVCF components:
$kubectl create namespace envoy-gateway-system && \
>kubectl create namespace envoy-gateway && \
>kubectl create namespace api-keys && \
>kubectl create namespace ess && \
>kubectl create namespace sis && \
>kubectl create namespace nvcf
  2. Next, label the namespaces for NVCF platform identification:
$kubectl label namespace envoy-gateway nvcf/platform=true && \
>kubectl label namespace api-keys nvcf/platform=true && \
>kubectl label namespace sis nvcf/platform=true && \
>kubectl label namespace ess nvcf/platform=true && \
>kubectl label namespace nvcf nvcf/platform=true
  3. Install Envoy Gateway:
$helm install eg oci://docker.io/envoyproxy/gateway-helm \
> --version v1.1.3 \
> -n envoy-gateway-system
  4. Create the GatewayClass resource:
$kubectl apply -f - <<EOF
apiVersion: gateway.networking.k8s.io/v1
kind: GatewayClass
metadata:
  name: eg
spec:
  controllerName: gateway.envoyproxy.io/gatewayclass-controller
EOF
  5. Create the Gateway resource:

The annotations section below is cloud-provider specific and controls how the external load balancer is provisioned. Choose the appropriate annotations for your environment:

  • AWS (EKS): Creates an internet-facing Network Load Balancer
  • GCP (GKE): Creates an external HTTP(S) load balancer
  • Azure (AKS): Creates a public load balancer
  • On-prem: Requires a load balancer solution like MetalLB, or use NodePort/Ingress. Consult your infrastructure documentation.
$kubectl apply -f - <<EOF
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: nvcf-gateway
  namespace: envoy-gateway
  annotations:
    # --- AWS (EKS) ---
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
    service.beta.kubernetes.io/aws-load-balancer-scheme: "internet-facing"
    # --- GCP (GKE) - use these instead for GCP ---
    # cloud.google.com/load-balancer-type: "External"
    # --- Azure (AKS) - use these instead for Azure ---
    # service.beta.kubernetes.io/azure-load-balancer-internal: "false"
spec:
  gatewayClassName: eg
  listeners:
    - name: http
      protocol: HTTP
      port: 80
      allowedRoutes:
        namespaces:
          from: Selector
          selector:
            matchLabels:
              nvcf/platform: "true"
    - name: tcp
      protocol: TCP
      port: 10081
      allowedRoutes:
        namespaces:
          from: Selector
          selector:
            matchLabels:
              nvcf/platform: "true"
EOF
  6. Verify the Gateway is ready:
$# Check Gateway status
$kubectl get gateway nvcf-gateway -n envoy-gateway
$
$# Wait for PROGRAMMED=True and ADDRESS to appear
$kubectl wait --for=condition=Programmed gateway/nvcf-gateway -n envoy-gateway --timeout=300s
$
$# Get the NLB address
$GATEWAY_ADDR=$(kubectl get gateway nvcf-gateway -n envoy-gateway -o jsonpath='{.status.addresses[0].value}')
$
$echo "$GATEWAY_ADDR"
$# e.g. abc123-4567890.us-west-2.elb.amazonaws.com
  7. Proceed to Step 2. Ensure you have your GATEWAY_ADDR ready to use in your environment configuration.

The Gateway address is embedded throughout your deployment. The domain value in your environment file, the Gateway API HTTPRoutes/TCPRoutes, and service discovery all depend on this address. If the Gateway or its underlying load balancer is deleted and recreated (e.g., due to a TCPRoute misconfiguration), a new address will be assigned.

If the address changes after deployment, you must update the domain in your environment file and re-sync the affected releases. See [Recovering from Gateway Address Changes] for the procedure.

The Gateway you created here will be used by the nvcf-gateway-routes chart to create HTTPRoutes and TCPRoutes for NVCF services. For details on how routing works, verification commands, and production DNS/HTTPS setup, see gateway-routing.

Step 2. Configure your environment file (environments/<environment-name>.yaml)

Environment configuration files define how NVCF is deployed in your specific environment. They are YAML files that provide values to the Helm charts.

Create your environment file from the template below (cp-env-eks-example.yaml).

$cd path/to/nvcf-self-managed-stack
$touch environments/<environment-name>.yaml
$# Copy the template into the file

The following example shows a typical configuration for Amazon EKS:

environments/eks-example.yaml
global:

  # Domain for external access (used by Gateway API HTTPRoutes)
  domain: "GATEWAY_ADDR" # Replace with ELB domain

  # =============================================================================
  # Helm Chart Sources Configuration
  # =============================================================================
  # Configure the OCI registry where NVCF Helm charts are stored.
  # This must point to a registry containing the NVCF chart packages.
  # =============================================================================
  helm:
    sources:
      registry: <your-account-id>.dkr.ecr.<your-region>.amazonaws.com
      repository: <your-ecr-repository-name> # if using nvcf-base, this will match the cluster name set in the terraform configuration
      # NGC Example:
      #   registry: nvcr.io
      #   repository: YOUR_ORG/YOUR_TEAM # e.g. 123456789102/YOUR_TEAM
      # ECR Example:
      #   registry: <your-account-id>.dkr.ecr.<your-region>.amazonaws.com
      #   repository: <your-ecr-repository-name>

  # =============================================================================
  # Container Image Registry Configuration
  # =============================================================================
  # Configure the container registry where NVCF service images are stored.
  # These images are pulled by Kubernetes when deploying the NVCF stack.
  # =============================================================================
  image:
    registry: <your-account-id>.dkr.ecr.<your-region>.amazonaws.com
    repository: <your-ecr-repository-name> # if using nvcf-base, this will match the cluster name set in the terraform configuration
    # NGC Example:
    #   registry: nvcr.io
    #   repository: YOUR_ORG/YOUR_TEAM # e.g. 123456789102/YOUR_TEAM
    # ECR Example:
    #   registry: <your-account-id>.dkr.ecr.<your-region>.amazonaws.com
    #   repository: <your-ecr-repository-name>

  nodeSelectors:
    enabled: true # If using nvcf-base to create EKS cluster, enabled: true
    vault:
      key: nvcf.nvidia.com/workload
      value: vault
    cassandra:
      key: nvcf.nvidia.com/workload
      value: cassandra
    controlplane:
      key: nvcf.nvidia.com/workload
      value: control-plane

  storageClass: "gp3" # Customize to your storage class, if using nvcf-base gp3
  storageSize: "10Gi" # Customize to your storage size, if using nvcf-base 20Gi

  # =============================================================================
  # Observability Configuration
  # =============================================================================
  # Enable distributed tracing via OTLP (disabled by default).
  # This must point to an OTLP-compatible collector.
  # =============================================================================
  observability:
    tracing:
      enabled: false
      collectorEndpoint: ""
      collectorPort: 4317
      collectorProtocol: http
      # Example:
      #   enabled: true
      #   collectorEndpoint: <your-collector-endpoint>
      #   collectorPort: <your-collector-port>
      #   collectorProtocol: <your-collector-protocol>

fakeGpuOperator:
  enabled: false # If deploying locally with no GPUs, true
  ubuntu:
    imageName: alpine-k8s
    tag: 1.30.12

accounts: # Default NVCF account configuration
  limits:
    maxFunctions: 10
    maxTasks: 10 # Note: Tasks (NVCT) are not currently supported for EA
    maxTelemetries: 10 # Note: BYOO is not currently supported for EA
    maxRegistryCreds: 10

# These static global values are processed in the values template
nats:
  enabled: true

cassandra:
  enabled: true

openbao:
  enabled: true
  migrations:
    issuerDiscovery:
      enabled: true # Recommended true for EKS - discovers OIDC issuer automatically

# Ingress Gateway Configuration
ingress:
  gatewayApi:
    enabled: true
    controllerNamespace: "envoy-gateway-system" # must be set by the environment
    routes:
      nvcfApi:
        routeAnnotations: {}
      apiKeys:
        routeAnnotations: {}
      invocation:
        routeAnnotations: {}
      grpc:
        routeAnnotations: {}
    gateways:
      shared:
        name: "nvcf-gateway" # must be set by the environment
        namespace: "envoy-gateway" # must be set by the environment
        listenerName: http
      grpc:
        name: "nvcf-gateway" # must be set by the environment
        namespace: "envoy-gateway" # must be set by the environment
        listenerName: tcp

domain and ingress Configuration

The domain and ingress sections of the environment file are used to configure the external access to the NVCF control plane.

If using the above example directly for EKS, you would replace the GATEWAY_ADDR with the actual ELB domain you obtained in Step 1.

domain: "GATEWAY_ADDR" # Replace with ELB domain

If using the above example directly for EKS, your ingress configuration would look like this:

ingress:
  gatewayApi:
    enabled: true
    controllerNamespace: "envoy-gateway-system"
    routes:
      nvcfApi:
        routeAnnotations: {}
      apiKeys:
        routeAnnotations: {}
      invocation:
        routeAnnotations: {}
      grpc:
        routeAnnotations: {}
    gateways:
      shared:
        name: "nvcf-gateway"
        namespace: "envoy-gateway"
        listenerName: http
      grpc:
        name: "nvcf-gateway"
        namespace: "envoy-gateway"
        listenerName: tcp

nodeSelectors Configuration

The nodeSelectors section of the environment file controls which nodes the NVCF control plane components are scheduled on. Disable it unless your cluster's node pools already carry the matching node labels.

If using nvcf-base to create your cluster, you would enable this section with the following configuration:

nodeSelectors:
  enabled: true
  vault:
    key: nvcf.nvidia.com/workload
    value: vault
  cassandra:
    key: nvcf.nvidia.com/workload
    value: cassandra
  controlplane:
    key: nvcf.nvidia.com/workload
    value: control-plane

cassandra Resource Tuning

The default Cassandra resource limits may be insufficient for clusters with large instance types (e.g., p5.48xlarge), causing Cassandra pods to be OOM-killed during initialization. If you observe Cassandra pods restarting with OOMKilled status, increase the Cassandra resource requests and limits using a Helmfile release values override (see overriding-helm-chart-values).

Add a values block to the cassandra release in helmfile.d/01-dependencies.yaml.gotmpl:

- name: cassandra
  version: 0.9.0
  condition: cassandra.enabled
  namespace: cassandra-system
  <<: *dependency
  values:
    - ../global.yaml.gotmpl
    - ../secrets/{{ requiredEnv "HELMFILE_ENV" }}-secrets.yaml
    - cassandra:
        resources:
          limits:
            cpu: "8"
            memory: 8192Mi
          requests:
            cpu: "2"
            memory: 4096Mi

Then apply the change to just Cassandra:

$HELMFILE_ENV=<environment-name> helmfile --selector name=cassandra sync

When overriding values on a release that uses <<: *dependency, you must re-include global.yaml.gotmpl and the secrets file in your values list because YAML merge replaces lists entirely. Adjust CPU and memory values to suit your workload.

helm and image Configuration

The helm and image sections tell NVCF which registries to pull Helm charts and container images from.

  • helm.sources: The OCI registry where NVCF Helm charts are stored. Helmfile pulls charts from here at deploy time (requires local authentication — see [Access Requirements]).
  • image: The container registry where NVCF service images are stored. Kubernetes pulls images from here at runtime.
# Helm Chart Sources Configuration
helm:
  sources:
    registry: "nvcr.io"
    repository: "YOUR_ORG/YOUR_TEAM"
    # NGC Example:
    #   registry: nvcr.io
    #   repository: 123456789102/YOUR_TEAM
    # ECR Example:
    #   registry: <your-account-id>.dkr.ecr.<your-region>.amazonaws.com
    #   repository: <your-ecr-repository-name>

# Container Image Registry Configuration
image:
  registry: nvcr.io
  repository: YOUR_ORG/YOUR_TEAM
  # NGC Example:
  #   registry: nvcr.io
  #   repository: 123456789102/YOUR_TEAM
  # ECR Example:
  #   registry: <your-account-id>.dkr.ecr.<your-region>.amazonaws.com
  #   repository: <your-ecr-repository-name>

If you have mirrored NVCF artifacts to your own registry (e.g., ECR), update both helm.sources and image to point to your mirror. See self-hosted-image-mirroring for details on mirroring artifacts.

When upgrading to a new nvcf-self-managed-stack version, you must re-mirror all artifacts before running helmfile sync. Each stack release may introduce new or updated container images and Helm charts. If these are not present in your private registry, pods will fail with ImagePullBackOff. Check the self-hosted-artifact-manifest for the complete list of required artifacts and versions.

Pulling directly from NGC is the recommended approach and avoids the need to manually mirror artifacts on every upgrade. If your environment permits it, configure helm.sources and image to point to the NGC registry (nvcr.io) and use your NGC API key for authentication. This ensures you always have access to the latest artifacts without additional mirroring steps.

These settings control where images are pulled from, not how Kubernetes authenticates to pull them. If your image registry is private, you may also need to configure image pull secrets — see Step 4.

Quick Start Summary: If you are using the example EKS environment YAML directly, used nvcf-base to create your cluster, and followed the ingress setup from Step 1, you only need to change:

  1. domain: Replace GATEWAY_ADDR with the load balancer address from Step 1
  2. helm.sources.registry and helm.sources.repository: Point to your Helm chart registry
  3. image.registry and image.repository: Point to your container image registry
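The domain substitution in item 1 can be scripted with sed. This is a sketch: printf stands in for the template file, and the load balancer hostname is an example value; against the real file you would run sed -i on environments/<environment-name>.yaml with the address you captured in Step 1.

```shell
# Example substitution; GATEWAY_ADDR below is a made-up ELB hostname.
GATEWAY_ADDR="abc123-4567890.us-west-2.elb.amazonaws.com"
printf 'domain: "GATEWAY_ADDR"\n' | sed "s/GATEWAY_ADDR/${GATEWAY_ADDR}/"
```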

Overriding Helm Chart Values

The environment file (environments/<environment-name>.yaml) controls global settings like domain, image, and nodeSelectors. However, you may need to override values for a specific Helm chart — for example, to increase Cassandra memory limits or change an image tag for one service.

Helmfile releases support a values property that passes values through to the underlying helm install/helm upgrade command. To add chart-specific overrides, edit the release definition in the appropriate file under helmfile.d/ and add a values block:

# Example: helmfile.d/01-dependencies.yaml.gotmpl
- name: cassandra
  version: 0.9.0
  condition: cassandra.enabled
  namespace: cassandra-system
  <<: *dependency
  values:
    - ../global.yaml.gotmpl
    - ../secrets/{{ requiredEnv "HELMFILE_ENV" }}-secrets.yaml
    - cassandra:
        resources:
          requests:
            cpu: "2"
            memory: 4096Mi
          limits:
            cpu: "8"
            memory: 8192Mi

When a release inherits from a template (<<: *dependency), specifying values on the release replaces the template’s values list (YAML merge does not append lists). You must re-include global.yaml.gotmpl and the secrets file.
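The replace-not-append behavior is standard YAML merge-key semantics, not something helmfile-specific. A minimal illustration (template and file names here are simplified stand-ins, not the stack's real definitions):

```yaml
# YAML merge keys (<<:) copy entries from the anchor, but any key defined on
# the release itself wins outright -- lists are replaced, never concatenated.
templates:
  dependency: &dependency
    values:
      - ../global.yaml.gotmpl
      - ../secrets/example-secrets.yaml

releases:
  - name: cassandra
    <<: *dependency
    values:                  # this list REPLACES the anchor's values list
      - cassandra:
          resources: {}
# Net effect: global.yaml.gotmpl and the secrets file are dropped unless
# you re-list them alongside the override.
```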

The values block is a list of YAML mappings. Keys correspond to the chart’s values.yaml structure. For example, to override a deeply nested value:

values:
  - api:
      image:
        tag: 2.223.9
      env:
        NVCF_REGISTRIES_ACCOUNT_PROVISIONING_ARTIFACT_TYPES: "CONTAINER,HELM"

Values defined here take the highest precedence, overriding both the environment file and global.yaml.gotmpl. Use helmfile template to preview the rendered manifests after adding overrides, then apply to a single release:

$# Preview changes
$HELMFILE_ENV=<environment-name> helmfile --selector name=cassandra template
$
$# Apply changes to just that release
$HELMFILE_ENV=<environment-name> helmfile --selector name=cassandra sync

Step 3. Configure your secrets file (secrets/<environment-name>-secrets.yaml)

Secrets configuration contains any sensitive data required for NVCF operation. The image pull secret credentials you insert here are used to bootstrap the NVCF API with registry credentials for all worker components (function sidecars), function containers, and Helm charts.

These credentials will then be used for function deployments. Note that if the registry credentials are not correct you can always update them using the steps in third-party-registries-self-hosted.

Create your secrets file from the template below (example-secrets.yaml). You must replace all instances of REPLACE_WITH_BASE64_DOCKER_CREDENTIAL with your actual base64-encoded registry credentials.

$cd path/to/nvcf-self-managed-stack
$touch secrets/<environment-name>-secrets.yaml
$# Copy the template into the file
secrets/example-secrets.yaml
# Required structure for any environment secrets.
# This is the minimal set of values to provide.

# Notes:
#   Cassandra:
#     The password should match the value set in the cassandra keyspace migrations
#
#   API:
#     The value for the registry will be used in three places, as it is
#     expected the same registry is used as a single source for all images.
#       openbao.migrations.env[1].value
#       api.accountBootstrap.registryCredentials[0].secret.value
#       api.accountBootstrap.registryCredentials[1].secret.value

openbao:
  migrations:
    env:
      # Stored in OpenBao shared secrets (written by migration job)
      - name: DEFAULT_CASSANDRA_PASSWORD
        value: "ch@ng3m3"
      # Stored in OpenBao KV for nvcf-api (written by migration job)
      - name: NVCF_API_SIDECARS_IMAGE_PULL_SECRET
        value: REPLACE_WITH_BASE64_DOCKER_CREDENTIAL # Replace with base64 credentials (ex. NGC / ECR / etc.) for your registry, refer to Working with Third-Party Registries.
      - name: ADMIN_CLIENT_ID
        value: ncp # <- keep this value

api:
  accountBootstrap:
    registryCredentials:
      - registryHostname: nvcr.io # ECR: <your-account-id>.dkr.ecr.<your-region>.amazonaws.com
        secret:
          name: nvcr-containers # ECR: ecr-containers
          value: REPLACE_WITH_BASE64_DOCKER_CREDENTIAL # Replace with base64 credentials (ex. NGC / ECR / etc.) for your registry, refer to Working with Third-Party Registries.
        artifactTypes: ["CONTAINER"]
        tags: []
        description: "NGC Container registry"
      - registryHostname: helm.ngc.nvidia.com # ECR: <your-account-id>.dkr.ecr.<your-region>.amazonaws.com
        secret:
          name: nvcr-helmcharts # ECR: ecr-helmcharts
          value: REPLACE_WITH_BASE64_DOCKER_CREDENTIAL # Replace with base64 credentials (ex. NGC / ECR / etc.) for your registry, refer to Working with Third-Party Registries.
        artifactTypes: ["HELM"]
        tags: []
        description: "NGC Helm registry"

NVCF supports these registries for function containers (set in api.accountBootstrap.registryCredentials): ACR (Azure), ECR (AWS), NVCR (NVIDIA), VolcEngine CR, JFrog/Artifactory, and Harbor.

Generating Base64-encoded Registry Credentials

Registry credentials must be base64-encoded in the format username:password. For detailed instructions on setting up credentials for specific registries (including IAM user creation for ECR), see third-party-registries-self-hosted.

$# Replace YOUR_NGC_API_KEY with your actual personal NGC API key from ngc.nvidia.com
$echo -n '$oauthtoken:YOUR_NGC_API_KEY' | base64 -w 0
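Note that `-w 0` (disable line wrapping) is a GNU coreutils flag; the BSD/macOS base64 does not accept it. A portable variant pipes through tr instead, and decoding the result is a quick sanity check that the username:password format survived (EXAMPLE_KEY below is a placeholder, not a real key):

```shell
# Portable encoding: tr strips any newlines the base64 implementation inserts.
# Single quotes keep the shell from expanding $oauthtoken.
cred=$(printf '%s' '$oauthtoken:EXAMPLE_KEY' | base64 | tr -d '\n')
# Decode to confirm the round trip.
printf '%s' "$cred" | base64 -d
```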

Step 4. Configure image pull secrets (conditional)

Skip this step if you have mirrored NVCF artifacts to a CSP-managed registry with built-in credential helpers (e.g., AWS ECR with IAM node roles, GKE Artifact Registry with Workload Identity, Azure ACR with managed identity). Kubernetes can pull images automatically in those environments.

The secrets file you configured in Step 3 handles API bootstrap registry credentials — these allow the NVCF API service to pull user function containers at runtime. Separately, Kubernetes itself needs image pull secrets to pull the NVCF control plane service images (API, SIS, Cassandra, etc.) from your registry.

If your image registry is private and your cluster nodes do not have built-in credential helpers, you must create Kubernetes docker-registry secrets in each NVCF namespace and configure the helmfile to reference them.

1. Create the pull secret in each NVCF namespace (create-nvcr-pull-secrets.sh):

$export NGC_API_KEY="<your-ngc-api-key>"
$
$for ns in cassandra-system nats-system nvcf api-keys ess sis vault-system; do
$ kubectl create namespace "$ns" --dry-run=client -o yaml | kubectl apply -f -
$done
$
$for ns in cassandra-system nats-system nvcf api-keys ess sis vault-system; do
$ kubectl create secret docker-registry nvcr-pull-secret \
> --docker-server=nvcr.io \
> --docker-username='$oauthtoken' \
> --docker-password="$NGC_API_KEY" \
> --namespace="$ns" \
> --dry-run=client -o yaml | kubectl apply -f -
$done

For registries other than NGC, replace --docker-server, --docker-username, and --docker-password with your registry credentials.

2. Reference the secret in your helmfile environment. The helmfile propagates imagePullSecrets to all NVCF charts automatically. Add the secret name to your environment YAML (e.g. environments/<your-env>.yaml):

imagePullSecrets:
  - name: nvcr-pull-secret

This replaces any need for a separate admission controller or policy engine to inject pull secrets.

Step 5. Deploy the NVCF control plane components

Set kubectl context to your cluster.

Ensure your local environment is authenticated to the container registry where your NVCF Helm charts are stored (see [Access Requirements]). Helmfile pulls OCI charts during deployment and will fail if not authenticated.

Before deploying, preview the rendered Kubernetes manifests:

$cd path/to/nvcf-self-managed-stack
$HELMFILE_ENV=<environment-name> helmfile template

This command will:

  1. Render all Helm charts with your environment and secrets
  2. Run validation hooks
  3. Display the resulting Kubernetes manifests

Review the output carefully to ensure:

  • Container image references are correct
  • Storage classes match those available in your cluster
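One way to spot-check both items is to filter the rendered output down to image and storage-class lines. The sample variable below stands in for real output; in practice you would pipe `HELMFILE_ENV=<environment-name> helmfile template` into the same grep.

```shell
# Hypothetical sample of rendered manifest lines (real output is much larger).
manifests='        image: nvcr.io/org/team/nvcf-api:1.0.0
        storageClassName: gp3
        image: nvcr.io/org/team/nvcf-api:1.0.0'
# Deduplicated view of every image reference and storage class in the render.
echo "$manifests" | grep -E 'image:|storageClassName:' | sort -u
```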

Deploy the self-managed stack:

$HELMFILE_ENV=<environment-name> helmfile sync

The initial deployment takes approximately 5-10 minutes for local development and 10-20 minutes for cloud deployments.

Deployment Progression and Monitoring

Helmfile will deploy services in the correct order with dependencies:

Phase 1: Dependency Layer (5-10 minutes)

  • NATS messaging service
  • OpenBao (secrets management)
  • Cassandra (database)
  • Helmfile Selector: release-group=dependencies

Phase 2: Control Plane Services (5-10 minutes)

  • NVCF API Service
  • SIS (Spot Instance Service)
  • gRPC Proxy
  • Invocation Service
  • API Keys Service
  • ESS API
  • Notary Service
  • Admin Issuer Proxy
  • Helmfile Selector: release-group=services

Monitor for account bootstrap failures: Once helmfile reaches Phase 2, open a separate terminal and watch events in the nvcf namespace:

$kubectl get events -n nvcf -w

The account bootstrap job runs as a post-install hook and is the most common failure point (usually due to environment or secrets misconfiguration). If it fails, see [Recovering from Partial Deployments] for recovery steps.

Phase 3: Ingress Configuration (1-2 minutes)

  • Gateway API Routes (if enabled)
  • Helmfile Selector: release-group=ingress

Phase 4: (Optional) GPU Operator (1-2 minutes)

  • Fake GPU Operator (optional, for development environments)
  • Helmfile Selector: release-group=workers

Open a separate terminal to monitor the deployment progress:

Monitor Each Deployment Phase:

$# Check namespace creation and preparation
$kubectl get ns
$
$# Phase 1: Check dependency services (release-group=dependencies)
$kubectl get pods -n nats-system # Should see nats-0, nats-1, nats-2
$kubectl get pods -n vault-system # Should see openbao-server-0, openbao-server-1, openbao-server-2
$kubectl get pods -n cassandra-system # Should see cassandra-0, cassandra-1, cassandra-2
$# Note: It's normal to see cassandra-initialize-cluster pods with "Error" status.
$# The initialization job retries on failure - as long as one pod shows "Completed"
$# and cassandra-migrations is Running/Completed, the deployment is progressing normally.
$
$# Phase 2: Check control plane services (release-group=services)
$kubectl get events -n nvcf -w # Watch for account bootstrap failures
$kubectl get pods -n nvcf # API, invocation-service, grpc-proxy, notary-service
$kubectl get pods -n sis # Spot Instance Service
$kubectl get pods -n api-keys # API Keys service, admin-issuer-proxy
$...
$
$# Phase 3: Check ingress (release-group=ingress)
$kubectl get httproutes -A # Gateway API routes (if enabled)

Cassandra initialization pods showing “Error” is expected. The cassandra-initialize-cluster job runs multiple pods in parallel and retries on failure. It is normal to see one or more pods with Error status. The deployment is healthy as long as at least one initialization pod reaches Completed and the cassandra-migrations job completes successfully.

If any pod remains in Pending, ContainerCreating, or ImagePullBackOff state for more than 5 minutes, see self-hosted-troubleshooting for issue identification commands and solutions.
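A quick filter for pods stuck in those states can be built with awk. The sample here stands in for `kubectl get pods -A --no-headers` output (pod names and ages are made up); against a live cluster you would pipe the real command into the same awk program.

```shell
# Hypothetical sample: namespace, pod, ready, status, restarts, age.
pods='nvcf        nvcf-api-abc   1/1   Running            0   5m
nvcf        nvcf-api-def   0/1   ImagePullBackOff   0   6m
sis         sis-api-xyz    0/1   Pending            0   7m'
# Print any pod whose status column shows a stuck state.
echo "$pods" | awk '$4=="ImagePullBackOff" || $4=="Pending" || $4=="ContainerCreating" {print $1"/"$2" -> "$4}'
```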

Recovering from Partial Deployments

Do not attempt to fix a partially failed deployment by re-running helmfile sync or helmfile apply. Helm releases in a failed state will skip initialization hooks on subsequent runs, leading to incomplete deployments that appear successful but don’t function correctly.

Redeploying Dependencies (if needed):

If a dependency service (Cassandra, NATS, OpenBao) fails or gets stuck, you can safely redeploy it individually:

$# Redeploy only Cassandra
$HELMFILE_ENV=<environment-name> helmfile --selector name=cassandra apply
$
$# Redeploy all dependencies (NATS, Cassandra, OpenBao)
$HELMFILE_ENV=<environment-name> helmfile --selector release-group=dependencies apply

Recovering from Service Failures (without destroying dependencies):

If the release-group=services deployment hangs or fails (for example, account bootstrap failure due to secrets misconfiguration), you can recover without destroying your dependencies.

1. Monitor for failures:

In a separate terminal, watch events in the nvcf namespace:

$kubectl get events -n nvcf -w

2. Check the account bootstrap logs (if it failed):

$kubectl logs job/nvcf-api-account-bootstrap -n nvcf

The bootstrap job auto-deletes after ~5 minutes, so monitor events to catch failures in real time.
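
Because the job is short-lived, a small retry loop can help capture the logs before the job is garbage-collected. This is only a sketch: `fetch_logs` is a hypothetical stand-in for the `kubectl logs job/nvcf-api-account-bootstrap -n nvcf` command above.

```shell
# Stand-in for: kubectl logs job/nvcf-api-account-bootstrap -n nvcf
fetch_logs() { echo "bootstrap log line"; }

# Retry a few times: the job may not exist yet, or may already be deleted.
captured=""
for attempt in 1 2 3 4 5; do
  if captured=$(fetch_logs 2>/dev/null) && [ -n "$captured" ]; then
    printf '%s\n' "$captured"
    break
  fi
  sleep 2
done
```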

3. Check the NVCF API logs for detailed error messages:

$kubectl logs -n nvcf -l app.kubernetes.io/name=nvcf-api --tail=100

4. Fix the root cause (e.g., correct your secrets/<environment-name>-secrets.yaml file).

5. Destroy the services and downstream releases:

$# Destroy services release group
$HELMFILE_ENV=<environment-name> helmfile --selector release-group=services destroy
$
$# Destroy downstream releases (ingress, workers, admin-issuer-proxy)
$HELMFILE_ENV=<environment-name> helmfile --selector release-group=ingress destroy
$HELMFILE_ENV=<environment-name> helmfile --selector release-group=workers destroy
$HELMFILE_ENV=<environment-name> helmfile --selector name=admin-issuer-proxy destroy

6. Clean up the service namespaces:

$kubectl delete namespace nvcf api-keys ess sis --ignore-not-found

7. Recreate namespaces and labels (required for Gateway API routing):

$kubectl create namespace api-keys && \
>kubectl create namespace ess && \
>kubectl create namespace sis && \
>kubectl create namespace nvcf
$
$kubectl label namespace api-keys nvcf/platform=true && \
>kubectl label namespace sis nvcf/platform=true && \
>kubectl label namespace ess nvcf/platform=true && \
>kubectl label namespace nvcf nvcf/platform=true
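
The eight create-and-label commands above can be collapsed into a single loop. The sketch below only prints the commands so it runs anywhere; drop the `echo` to execute them against your cluster.

```shell
# Recreate and label the four service namespaces; `echo` makes this a dry run.
for ns in api-keys ess sis nvcf; do
  echo kubectl create namespace "$ns"
  echo kubectl label namespace "$ns" nvcf/platform=true
done
```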

8. Re-sync services (this triggers fresh post-install hooks):

$HELMFILE_ENV=<environment-name> helmfile --selector release-group=services sync

9. Sync remaining releases after services succeed:

$HELMFILE_ENV=<environment-name> helmfile --selector name=admin-issuer-proxy sync
$HELMFILE_ENV=<environment-name> helmfile --selector release-group=ingress sync
$HELMFILE_ENV=<environment-name> helmfile --selector release-group=workers sync

Full Restart (if dependencies are also broken):

If dependencies are corrupted or you prefer a clean slate, follow the complete Uninstalling steps, fix your configuration, then redeploy from Step 1.

Recovering from Gateway Address Changes

If your Gateway or its underlying load balancer was deleted and recreated (e.g., due to a TCPRoute misconfiguration or infrastructure change), the external address will change. Services that depend on the domain value — including Gateway API routes, SIS cluster registration, and API hostname resolution — will break until the new address is propagated.

1. Get the new Gateway address:

$GATEWAY_ADDR=$(kubectl get gateway nvcf-gateway -n envoy-gateway -o jsonpath='{.status.addresses[0].value}')
$echo "$GATEWAY_ADDR"

2. Update your environment file with the new address:

$# Edit environments/<environment-name>.yaml
$# Change: domain: "OLD_ADDRESS"
$# To: domain: "NEW_GATEWAY_ADDR"
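
Editing the file by hand works; if you prefer to script the change, a sed substitution over the `domain:` key is one possible sketch. It assumes `domain:` is a top-level, double-quoted key in the environment file; try it against a scratch copy first. The file and address below are illustrative stand-ins.

```shell
# Scratch file standing in for environments/<environment-name>.yaml
GATEWAY_ADDR="203.0.113.10"               # illustrative new Gateway address
envfile=$(mktemp)
printf 'domain: "198.51.100.7"\n' > "$envfile"

# Replace the old domain value with the new address (portable; avoids sed -i).
sed "s|^domain: \".*\"|domain: \"${GATEWAY_ADDR}\"|" "$envfile" > "${envfile}.new"
mv "${envfile}.new" "$envfile"
cat "$envfile"
```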

3. Re-sync ingress and services that depend on the domain:

$# Re-sync gateway routes (picks up new domain)
$HELMFILE_ENV=<environment-name> helmfile --selector release-group=ingress sync
$
$# Re-sync services that embed the domain (API, admin-issuer-proxy)
$HELMFILE_ENV=<environment-name> helmfile --selector release-group=services sync
$HELMFILE_ENV=<environment-name> helmfile --selector name=admin-issuer-proxy sync

4. Verify routes are using the new address:

$kubectl get httproutes -A
$kubectl get tcproutes -A

If you encounter issues during deployment, consult the self-hosted-troubleshooting guide for common problems and solutions.

Step 6: Verify the Installation

Verify that the installation succeeded by checking that all pods are running and all Helm releases are in the deployed state.

$# View all pods with node assignment and status; all should be in Running or Completed state
$kubectl get pods -A -o wide
$
$# Check helm releases status
$helm list -A
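
The `helm list` check can be turned into a pass/fail filter for releases whose status is not `deployed`. The sketch below runs over a captured, simplified sample of `helm list -A` output (release names and column layout are illustrative); adapt the field index to the real table output before relying on it.

```shell
# Sample (illustrative) name/namespace/status summary of: helm list -A
releases='nvcf-api nvcf deployed
cassandra cassandra-system deployed
openbao-server vault-system deployed'

# Any release not in "deployed" state indicates a problem.
not_deployed=$(printf '%s\n' "$releases" | awk '$3 != "deployed" {print $1}')
if [ -z "$not_deployed" ]; then
  echo "all releases deployed"
else
  echo "check these releases: $not_deployed"
fi
```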

Verify API Connectivity (Optional)

If you configured Gateway API ingress, you can verify the NVCF API is accessible by running the following commands.

1. Set up environment variables:

$# Get the Gateway address (from Step 1)
$export GATEWAY_ADDR=$(kubectl get gateway nvcf-gateway -n envoy-gateway -o jsonpath='{.status.addresses[0].value}')
$echo "Gateway Address: $GATEWAY_ADDR"

2. Generate an admin token:

$# Generate an admin API token
$export NVCF_TOKEN=$(curl -s -X POST "http://${GATEWAY_ADDR}/v1/admin/keys" \
> -H "Host: api-keys.${GATEWAY_ADDR}" \
> | grep -o '"value":"[^"]*"' | cut -d'"' -f4)
$
$echo "Token generated: ${NVCF_TOKEN:0:20}..."
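
The grep/cut pipeline above is a lowest-common-denominator way to pull the `value` field out of the JSON response. If `jq` is available (step 3 already uses it), `jq -r` is more robust against field ordering and escaping. The response body below is an illustrative stand-in, not the actual API schema.

```shell
# Illustrative stand-in for the /v1/admin/keys response body.
resp='{"id":"key-123","value":"nvapi-EXAMPLE-TOKEN"}'

# Extract the token with jq instead of grep/cut.
NVCF_TOKEN=$(printf '%s' "$resp" | jq -r '.value')
printf 'Token generated: %.20s...\n' "$NVCF_TOKEN"
```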

3. List functions (should be empty initially):

$# List all functions
$curl -s -X GET "http://${GATEWAY_ADDR}/v2/nvcf/functions" \
> -H "Host: api.${GATEWAY_ADDR}" \
> -H "Authorization: Bearer ${NVCF_TOKEN}" | jq .

Next Steps

After the control plane installation is successfully complete, proceed to self-managed-clusters to set up GPU cluster operations.

Uninstalling

This will delete all NVCF resources, including data stored in persistent volumes. Ensure you have backups of any important data.

To remove the NVCF installation:

$HELMFILE_ENV=<environment-name> helmfile destroy

After helmfile destroy completes, clean up the namespaces:

$# Delete NVCF namespaces
$kubectl delete namespace cassandra-system nats-system vault-system \
> nvcf api-keys ess sis \
> --ignore-not-found

To also remove the Gateway infrastructure created in Step 1:

$# Delete the Gateway and GatewayClass resources
$kubectl delete gateway nvcf-gateway -n envoy-gateway --ignore-not-found
$kubectl delete gatewayclass eg --ignore-not-found
$
$# Uninstall Envoy Gateway
$helm uninstall eg -n envoy-gateway-system
$
$# Delete the gateway namespaces
$kubectl delete namespace envoy-gateway envoy-gateway-system --ignore-not-found
$
$# (Optional) Remove Gateway API CRDs if no longer needed
$kubectl delete -f https://github.com/kubernetes-sigs/gateway-api/releases/download/v1.2.0/experimental-install.yaml