NVCA Configuration


This page documents NVCA configuration options. For cluster registration and lifecycle operations, see self-managed-clusters.

Advanced Settings

Some of the options below are supported only on Cluster Agent Versions 2.50.0 or higher.

See below for descriptions of all available configuration options.

| Configuration | Description |
| --- | --- |
| Cluster Agent Version | Version of the cluster agent to be installed on the cluster. Defaults to the latest available. Recommended to use the latest version available at the time of registration unless there are business reasons to pick another version. |
| Node Selector Key and Node Selector Value | This key-value pair is the label selector used to control placement of the cluster agent and cluster agent operator pods on specific nodes in the cluster. Not providing a value allows these infrastructure components to be placed anywhere on the cluster. Before registration, ensure there are matching nodes in the cluster using `kubectl get node -l key=value`; an incorrect value will cause operational issues. For additional details: Labels & Selectors |
| Priority Class | Set an appropriate Kubernetes priority class name for the cluster agent and operator pods. Additional details: Priority Class |
| Model Cache Volume Mount Options | Configure the model cache volume mount options based on the CSI driver capabilities on the cluster. Refer to the CSI driver documentation. Defaults to Enabled with `ro,norecovery,nouuid` on an upgrade. Requires cluster reconfiguration after upgrade to prevent disruption. Additional details: Mount options |
| Network CIDR Range | Quoted, comma-separated list of CIDR ranges for outbound network access for the infrastructure components and workloads on the cluster. |
| Worker Degradation Period | Stabilization time (in minutes) before the cluster agent stops considering a worker healthy and initiates a purge. |
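
For example, with a hypothetical selector pair `infra-role=nvca`, you can confirm matching nodes exist before registration:

$kubectl get node -l infra-role=nvca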

Cluster Features

Cluster Features allow enabling specific features on the cluster. Dynamic GPU Discovery is enabled by default.

See below for descriptions of all cluster features.

| Capability | Description |
| --- | --- |
| Dynamic GPU Discovery | Enables automatic detection and management of allocatable GPU capacity within the cluster via the NVIDIA GPU Operator. This capability is strongly recommended and should only be disabled in cases where Manual Instance Configuration is required. |
| Caching Support | Enhances application performance by storing frequently accessed data (models, resources, and containers) in a cache. See cluster-caching. |
| Optimized AI Workload Scheduling | Enables support for optimized AI workload scheduling using KAI Scheduler. Additional setup details: KAI Scheduler |

Disabling Dynamic GPU Discovery requires manual instance configuration. See Manual Instance Configuration.

Caching Support

Enabling caching for models, resources and containers is recommended for optimal performance. You must create StorageClass configurations for caching within your cluster to fully enable “Caching Support” with the Cluster Agent. See examples below.

Caching is currently not supported for AWS EKS.

StorageClass Configurations in GCP

```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: nvcf-sc
provisioner: pd.csi.storage.gke.io
allowVolumeExpansion: true
volumeBindingMode: Immediate
reclaimPolicy: Retain
parameters:
  type: pd-ssd
  csi.storage.k8s.io/fstype: xfs
```

```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: nvcf-cc-sc
provisioner: pd.csi.storage.gke.io
allowVolumeExpansion: true
volumeBindingMode: Immediate
reclaimPolicy: Retain
parameters:
  type: pd-ssd
  csi.storage.k8s.io/fstype: xfs
```

GCP currently allows only 10 VMs to mount a Persistent Volume in read-only mode.

StorageClass Configurations in Azure

```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: nvcf-sc
provisioner: file.csi.azure.com
allowVolumeExpansion: true
volumeBindingMode: Immediate
reclaimPolicy: Retain
parameters:
  skuName: Standard_LRS
  csi.storage.k8s.io/fstype: xfs
```

```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: nvcf-cc-sc
provisioner: file.csi.azure.com
allowVolumeExpansion: true
volumeBindingMode: Immediate
reclaimPolicy: Retain
parameters:
  skuName: Standard_LRS
  csi.storage.k8s.io/fstype: xfs
```

StorageClass Configurations in Oracle Cloud

```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: nvcf-sc
provisioner: blockvolume.csi.oraclecloud.com
allowVolumeExpansion: true
volumeBindingMode: Immediate
reclaimPolicy: Retain
parameters:
  csi.storage.k8s.io/fstype: xfs
```

```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: nvcf-cc-sc
provisioner: blockvolume.csi.oraclecloud.com
allowVolumeExpansion: true
volumeBindingMode: Immediate
reclaimPolicy: Retain
parameters:
  csi.storage.k8s.io/fstype: xfs
```

Apply the StorageClass Configurations

Save the StorageClass templates to files nvcf-sc.yaml and nvcf-cc-sc.yaml, then apply them:

$kubectl create -f nvcf-sc.yaml
$kubectl create -f nvcf-cc-sc.yaml
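
To confirm the StorageClasses were created:

$kubectl get storageclass nvcf-sc nvcf-cc-sc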

Override the Default Mount Options for Cache Volumes

Supported in Cluster Agent Versions 2.45.21 or higher

Note that this is a post-NVCA-install operation and needs careful consideration to avoid volume corruption. Use with caution.

With caching support enabled, the Cluster Agent by default applies the Linux mount options `ro,norecovery,nouuid`.

If the CSI driver in the cluster doesn't support mount options, you can run the following command on the cluster to disable them:

$nvcf_cluster_name="$(kubectl get nvcfbackends -n nvca-operator -o name | cut -d'/' -f2)"
$kubectl patch nvcfbackends.nvcf.nvidia.io -n nvca-operator "$nvcf_cluster_name" \
> --type='merge' \
> -p '{"spec":{"overrides":{"agentConfig":{"cacheMountOptionsEnabled":false}}}}'

To update the mount options to a different value (for example, `ro,norecovery`), use the following command. Replace the options with the desired value as dictated by your CSI driver's supported volume mount options.

$nvcf_cluster_name="$(kubectl get nvcfbackends -n nvca-operator -o name | cut -d'/' -f2)"
$kubectl patch nvcfbackends.nvcf.nvidia.io -n nvca-operator "$nvcf_cluster_name" \
> --type='merge' \
> -p '{"spec":{"overrides":{"agentConfig":{"cacheMountOptionsEnabled":true,"cacheMountOptions":"ro,norecovery"}}}}'

Cluster Maintenance Modes

The Cluster Agent supports two maintenance modes that control how workloads are handled during cluster configuration changes. Configure maintenance mode via the CordonMaintenance or CordonAndDrainMaintenance feature flag, respectively.

Cordon Maintenance

In this mode, existing workloads continue to run uninterrupted on the cluster. New workloads will not be scheduled until maintenance mode is cleared.

Cordon and Drain Maintenance

In this mode, all existing workloads in the cluster are terminated. No updates to the state of workloads will be effective while in this mode.

Once maintenance mode is configured, it can take up to 10 minutes for the agent reconfiguration to take effect.
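
As a sketch, a maintenance mode can be enabled the same way as other feature flags (see Managing Feature Flags below). The flag list shown is illustrative; the patch replaces the whole list, so include every existing flag you want to keep:

$nvcf_cluster_name="$(kubectl get nvcfbackends -n nvca-operator -o name | cut -d'/' -f2)"
$kubectl patch nvcfbackends.nvcf.nvidia.io -n nvca-operator "$nvcf_cluster_name" \
> --type='merge' \
> -p '{"spec":{"overrides":{"featureGate":{"values":["DynamicGPUDiscovery","CordonMaintenance"]}}}}'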

Account-Isolated Clusters

Supported in Cluster Agent Versions 2.49.0 or higher

Clusters with the AccountIsolation attribute have enhanced isolation between workloads, ensuring that function and task instances run on nodes isolated by NCAId. This is particularly important for customers with strict security requirements or those who want to ensure complete separation of workloads at the account level.

In Account Isolated mode, the cluster may be inefficient in GPU utilization if workloads are not designed to utilize the full capacity of the isolated nodes. When toggling this attribute, cluster workloads must also be drained using CordonAndDrainMaintenance mode to effectively re-balance them, as the attribute is not applied retroactively.

NVLink-Optimized Clusters

Clusters with MNNVL GPUs such as GB200 can run multi-node workloads that require inter-GPU data transfer with large performance improvements when properly configured. The Cluster Agent can be directed to configure multi-node workloads with their own ComputeDomains automatically to optimize inter-GPU connections.

Additional prerequisites:

  • The NVIDIA GPU DRA driver must be installed.
  • The NVLinkOptimized cluster attribute must be added during cluster registration.

In NVLink-optimized mode, the NVIDIA GPU DRA driver currently limits each node to one GPU-enabled Pod. To optimally utilize these clusters, GPU-enabled Pods should request a full node's worth of GPUs. For example, nodes in GB200 clusters have 4 GPUs each, so the containers of each GPU-enabled Pod in a workload must request `nvidia.com/gpu` resources that sum to 4.
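
For illustration, a minimal Pod sketch for a GB200 node (name and image are hypothetical):

```yaml
# Minimal sketch: a single container requesting a full GB200 node's worth of GPUs.
apiVersion: v1
kind: Pod
metadata:
  name: multinode-worker                      # hypothetical name
spec:
  containers:
    - name: worker
      image: example.com/my-workload:latest   # hypothetical image
      resources:
        limits:
          nvidia.com/gpu: 4                   # sums to the node's 4 GPUs
```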

Kata Container-Isolated Workloads

Clusters that have this attribute run all function/task Pods in Kata Containers without exception.

Additional cluster restrictions to be aware of:

  • Pod containers must at least have resource limits defined for cpu and memory. If unset, runtime behavior is undefined. (A minimal sketch follows this list.)

  • Object count limits are configured for resource fairness in these clusters:

    • ConfigMaps: 20
    • Secrets: 20
    • Services: 20
    • Pods: 100
    • Jobs: 10
    • CronJobs: 10
    • Deployments: 10
    • ReplicaSets: 10
    • StatefulSets: 10
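
A minimal sketch of a Pod spec that satisfies the cpu and memory limit requirement (name and image are hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: kata-function-pod                     # hypothetical name
spec:
  containers:
    - name: app
      image: example.com/my-function:latest   # hypothetical image
      resources:
        limits:
          cpu: "2"      # required: cpu limit
          memory: 4Gi   # required: memory limit
```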

Network Configuration

The network policies described in this section are only enforced if your cluster’s Container Network Interface (CNI) supports Kubernetes Network Policies. Common CNIs that support network policies include:

  • Calico
  • Cilium
  • Weave Net
  • Antrea

If your cluster uses a CNI that doesn’t support network policies, the security controls described below will not be enforced, and pods will be able to communicate with each other without restrictions. This could lead to security vulnerabilities.

The NVCA operator requires outbound network connectivity to pull images, charts, and report logs and metrics. During installation, the operator pre-configures the nvca-namespace-networkpolicies configmap with the following network policies:

| Policy Name | Description |
| --- | --- |
| allow-egress-gxcache | Allows egress traffic to the GX Cache namespace for caching operations (only relevant for NVIDIA managed clusters) |
| allow-egress-internet-no-internal-no-api | Allows egress traffic to the public internet (0.0.0.0/0) but blocks traffic to common private IP ranges. Also allows DNS resolution via kube-dns. |
| allow-egress-intra-namespace | Controls pod-to-pod communication within the same namespace. This policy is only applied to function namespaces and not to shared pod instance namespaces. |
| allow-egress-nvcf-cache | Allows egress traffic to NVCF cache services (only relevant for NVIDIA managed clusters) |
| allow-egress-prometheus-nvcf-byoo | Allows egress traffic to Prometheus monitoring endpoints (only relevant for NVIDIA managed clusters) |
| allow-ingress-monitoring | Allows ingress traffic for monitoring services |
| allow-ingress-monitoring-dcgm | Allows ingress traffic for DCGM monitoring |
| allow-ingress-monitoring-gxcache | Allows ingress traffic for GX Cache monitoring (only relevant for NVIDIA managed clusters) |

Key Network Requirements

  1. Kubernetes API Access

    • NVCA requires access to the Kubernetes API
    • Consult your cloud provider’s documentation (e.g., Azure, AWS, GCP) for the Kubernetes API endpoint
  2. Container Registry and NVCF Control Plane Access

    • NVCA requires access to your container registry to pull images and Helm charts.
    • NVCA requires network access to NVCF control plane services (SIS, NATS, ESS) running in your cluster. The specific endpoints depend on your gateway configuration. See gateway-routing for details.
  3. Monitoring and Logging

    • If your environment requires advanced monitoring or logging (e.g., sending logs to external endpoints), ensure your cluster’s NetworkPolicy or firewall rules allow egress to the required monitoring/logging domains

Network Policy Customization via ConfigMap

The NVCA operator pre-configures the nvca-namespace-networkpolicies configmap during installation. If you need to customize these policies for your cluster, you can use a configmap to override the default policies.

To customize a network policy:

  1. Create a configmap with your custom network policy, for example:

    ```yaml
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: demopatch-configmap
      namespace: nvca-operator
      labels:
        nvca.nvcf.nvidia.io/operator-kustomization: enabled
    data:
      patches: |
        - target:
            group: ""
            version: v1
            kind: ConfigMap
            name: nvca-namespace-networkpolicies
          patch: |-
            - op: replace
              path: /data/allow-egress-internet-no-internal-no-api
              value: |
                apiVersion: networking.k8s.io/v1
                kind: NetworkPolicy
                metadata:
                  name: allow-egress-internet-no-internal-no-api
                  labels:
                    app.kubernetes.io/name: nvca
                    app.kubernetes.io/instance: nvca
                    app.kubernetes.io/version: "1.0"
                    app.kubernetes.io/managed-by: nvca-operator
                spec:
                  podSelector: {}
                  policyTypes:
                    - Egress
                  egress:
                    - to:
                        - namespaceSelector: {}
                          podSelector:
                            matchLabels:
                              k8s-app: kube-dns
                    - to:
                        - namespaceSelector:
                            matchLabels:
                              kubernetes.io/metadata.name: gxcache
                      ports:
                        - port: 8888
                          protocol: TCP
                        - port: 8889
                          protocol: TCP
    ```

  2. Apply the configmap:

    ```bash
    kubectl apply -f patchcm.yaml
    ```

  3. Verify the changes:

    ```bash
    kubectl logs -n nvca-operator -l app.kubernetes.io/name=nvca-operator
    ```

    You should see a message indicating successful patching: `configmap patched successfully`

The changes will be applied to the `nvcf-backend` namespace and will be used for all new namespaces' network policies. The network policies will also be updated across all helm chart namespaces.
Network Policy Customization via clusterNetworkCIDRs Flag

You can customize the `allow-egress-internet-no-internal-no-api` policy with Helm by setting the `networkPolicy.clusterNetworkCIDRs` flag. For example:

```bash
helm upgrade nvca-operator -n nvca-operator --create-namespace -i --reuse-values --wait \
  oci://${REGISTRY}/${REPOSITORY}/nvca-operator --version <version> \
  --set networkPolicy.clusterNetworkCIDRs="{10.0.0.0/8,172.16.0.0/12,192.168.0.0/16,100.64.0.0/12}"
```

This command overrides the default Kubernetes networking CIDRs specified in the `allow-egress-internet-no-internal-no-api` policy with your input.
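
To spot-check the resulting policy objects across the managed namespaces:

$kubectl get networkpolicy -A | grep allow-egress-internet-no-internal-no-api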

Advanced: Additional Configuration Options

CSI Volume Mount Options

The NVIDIA Cluster Agent supports customizing CSI volume mount options for caching. This allows you to configure specific mount options for the CSI volumes used in your cluster.

CSI volume mount options configuration is an experimental feature and may be subject to change in future releases.

To configure CSI volume mount options:

  1. Get the NVCF cluster name:

    $nvcf_cluster_name="$(kubectl get nvcfbackends -n nvca-operator -o name | cut -d'/' -f2)"

  2. View the current mount options configuration:

    $kubectl get nvcfbackend -n nvca-operator "$nvcf_cluster_name" -o yaml | grep -A 5 "MountOptions"

  3. Set mount options (example):

    $kubectl patch nvcfbackends.nvcf.nvidia.io -n nvca-operator "$nvcf_cluster_name" \
    > --type='merge' \
    > -p '{"spec":{"overrides":{"agentConfig":{"cacheMountOptionsEnabled":true,"cacheMountOptions":"ro,norecovery,nouuid"}}}}'

  4. Verify the changes:

    $kubectl get nvcfbackend -n nvca-operator "$nvcf_cluster_name" -o yaml | grep -A 5 "MountOptions"

The default mount options are:

  • ro: Read-only mount
  • norecovery: Skip journal recovery
  • nouuid: Ignore filesystem UUID

You can modify these options based on your specific requirements. The configuration will be applied to all CSI volumes created by the NVIDIA Cluster Agent for caching purposes.

Node Selection for Cloud Functions

By default, the cluster agent uses all nodes discovered with GPU resources to schedule Cloud Functions, and no additional configuration is required.

To limit the nodes that can run Cloud Functions, apply the nvca.nvcf.nvidia.io/schedule=true label to the specific nodes.

If there are no nodes in the cluster with the nvca.nvcf.nvidia.io/schedule=true label set, the cluster agent will switch to the default behavior of using all nodes with GPUs.

For example, to mark specific nodes as schedulable in a cluster:

$kubectl label node <node-name> nvca.nvcf.nvidia.io/schedule=true

To mark a single node from that set as unschedulable for NVCF workloads, remove the label:

$kubectl label node <node-name> nvca.nvcf.nvidia.io/schedule-
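
To list the nodes currently opted in:

$kubectl get nodes -l nvca.nvcf.nvidia.io/schedule=true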

GPU Product Name Override

The NVIDIA Cluster Agent supports GPU product name override via node label. This is useful for customers who want to use a custom product name or override the default GPU product name.

For example, to set the GPU product name for a node, use the following command:

$kubectl label node <node-name> nvca.nvcf.nvidia.io/gpu.product=<product-name>

The GPU Product Name Override via node labeling only takes effect when there are no pre-existing active instances in the cluster. If active instances already exist with the original GPU instance types, the override will not be applied.
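
To review the label values across nodes (-L prints the label as an extra column):

$kubectl get nodes -L nvca.nvcf.nvidia.io/gpu.product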

Managing Feature Flags

The NVIDIA Cluster Agent supports various feature flags that can be enabled or disabled to customize its behavior. The following are some commonly used feature flags:

| Feature Flag | Description |
| --- | --- |
| DynamicGPUDiscovery | Dynamically discover GPUs and instance types on this cluster. Enabled by default for customer-managed clusters. |
| HelmSharedStorage | Configure Helm functions and tasks with shared read-only storage for ESS secrets. This is required for enabling Helm-based tasks in your cluster. Note that turning on this feature flag requires additional configuration; see the Enable Helm Shared Storage section below. |
| LogPosting | Post instance logs to SIS directly. Enabled by default for NVIDIA managed clusters. |
| MultiNodeWorkloads | Instruct NVCA to report multi-node instance types to SIS during registration. |
| SelfHosted | Enables local vault-based authentication for self-hosted deployments. Required when ngcConfig.clusterSource is self-managed. |
| HelmAllowCPUNodes | Allow CPU-only pods (e.g., etcd, redis, envoy) from Helm-based functions to be scheduled on non-GPU nodes; GPU pods keep required instance-type affinity. Reduces cost and improves GPU utilization. Mutually exclusive with HelmResourceConstraints. See helm-allow-cpu-nodes. |

Setting Feature Flags at Install Time

Feature flags can be set during the initial NVCA Operator installation through Helm values.

Standalone Helm

Set selfManaged.featureGateValues in your values file. The chart default is ["DynamicGPUDiscovery"].

In the values file:

```yaml
selfManaged:
  featureGateValues: ["DynamicGPUDiscovery", "SelfHosted", "LogPosting"]
```

Or via --set during install:

$helm upgrade --install nvca-operator \
> oci://${REGISTRY}/${REPOSITORY}/nvca-operator \
> --version 1.2.7 \
> --namespace nvca-operator --create-namespace \
> -f nvca-operator-values.yaml \
> --set 'selfManaged.featureGateValues={DynamicGPUDiscovery,SelfHosted,LogPosting}'

The --set flag replaces the entire list. You must include all desired flags, not just the new one.

To update flags on an existing installation, run helm upgrade with the updated values file or --set:

$helm upgrade nvca-operator \
> oci://${REGISTRY}/${REPOSITORY}/nvca-operator \
> --version 1.2.7 \
> --namespace nvca-operator \
> -f nvca-operator-values.yaml \
> --set 'selfManaged.featureGateValues={DynamicGPUDiscovery,SelfHosted,LogPosting,MultiNodeWorkloads}'

Helmfile

The Helmfile deployment uses the same selfManaged.featureGateValues chart value. By default, the helmfile does not set this field, so the chart default ["DynamicGPUDiscovery"] applies.

To override, add featureGateValues to the worker release values in helmfile.d/03-worker.yaml.gotmpl:

```yaml
- selfManaged:
    featureGateValues: ["DynamicGPUDiscovery", "SelfHosted", "LogPosting"]
  imageCredHelper:
    imageRepository: {{ .Values.global.image.registry }}/{{ .Values.global.image.repository }}/nvcf-image-credential-helper
  sharedStorage:
    imageRepository: {{ .Values.global.image.registry }}/{{ .Values.global.image.repository }}/samba
```

Alternatively, set it in an environment-specific values file (e.g., environments/<env>.yaml) under the same key path, which avoids editing the shared helmfile template.

After changing, run helmfile --selector release-group=workers sync to apply.

Verifying Feature Flags

After installing or upgrading, verify the active feature flags:

$nvcf_cluster_name="$(kubectl get nvcfbackends -n nvca-operator -o name | cut -d'/' -f2)"
$kubectl get nvcfbackends -n nvca-operator "$nvcf_cluster_name" -o jsonpath='{.spec.featureGate.values}' && echo ""

The NVCA agent pod command-line args also reflect the active flags:

$kubectl get pods -n nvca-system -o yaml | grep -i feature

Modifying Feature Flags at Runtime

Feature flags can also be modified at runtime by patching the NVCFBackend resource directly. This is useful for quick changes without running a helm upgrade.

Prefer helm upgrade with updated values to change feature flags. Direct patches to the NVCFBackend will be overwritten on the next Helm upgrade.

  1. Get the NVCF cluster name:

    $nvcf_cluster_name="$(kubectl get nvcfbackends -n nvca-operator -o name | cut -d'/' -f2)"

  2. View the current feature flags:

    $kubectl get nvcfbackends -n nvca-operator -o yaml | grep -A 5 "featureGate:"

  3. Patch the feature flags. Note that this will override all feature flags.

    When modifying feature flags, you must preserve any existing feature flags you want to keep. The patch command overrides all feature flags, so you need to include all desired feature flags in the values array.

    $kubectl patch nvcfbackends.nvcf.nvidia.io -n nvca-operator "$nvcf_cluster_name" --type=merge -p '{"spec":{"overrides":{"featureGate":{"values":["LogPosting","CachingSupport"]}}}}'

    As an alternative to the patch command, you can also modify the feature flags using the edit command:

    $kubectl edit nvcfbackend -n nvca-operator

    ```yaml
    ...
    spec:
      featureGate:
        values:
          - LogPosting # Existing feature flag
      overrides:
        featureGate:
          values:
            - LogPosting # Existing feature flag copied over
            - -CachingSupport # Caching support disabled
    ...
    ```

  4. Verify the changes:

    $kubectl get pods -n nvca-system -o yaml | grep -i feature

CPU-only pod scheduling (HelmAllowCPUNodes)

Supported in Cluster Agent 2.50.4 or higher.

When HelmAllowCPUNodes is enabled, NVCA schedules CPU-only pods from Helm-based functions on non-GPU nodes while keeping GPU pods on GPU nodes with their required instance-type affinity. This reduces infrastructure cost and improves GPU utilization.

Scheduling behavior

| Pod type | Scheduling behavior |
| --- | --- |
| GPU pods | Required instance-type node affinity (unchanged). |
| CPU-only pods | Preferred anti-affinity (weight 100) for nodes with the instance-type label; schedules on CPU-only nodes when available, can fall back to GPU nodes if needed. |

HelmAllowCPUNodes cannot be enabled when HelmResourceConstraints is enabled. HelmResourceConstraints is enabled by default. You must disable it first by adding -HelmResourceConstraints to the feature gate values, then add HelmAllowCPUNodes.

How to enable

  1. Follow the steps in Modifying Feature Flags at Runtime.

  2. In spec.overrides.featureGate.values, include all existing flags you want to keep, add -HelmResourceConstraints, then add HelmAllowCPUNodes.

    Example (preserve your existing flags and add the two changes):

    ```yaml
    spec:
      overrides:
        featureGate:
          values:
            - <existing flags> # copy existing flags here
            - -HelmResourceConstraints
            - HelmAllowCPUNodes
    ```

How to disable

Remove HelmAllowCPUNodes from the values list, or explicitly disable it:

```yaml
spec:
  overrides:
    featureGate:
      values:
        - -HelmAllowCPUNodes
```

Enable Helm Shared Storage

The NVIDIA Cluster Agent supports shared storage for Helm charts through the SMB CSI driver. This feature is required for enabling Helm-based tasks in your cluster.

The Helm shared storage feature must be enabled before you can use Helm-based tasks in your cluster. This feature provides the necessary storage infrastructure for Helm chart operations.

When enabling the Helm shared storage feature flag, you must preserve any existing feature flags. The patch command will override all feature flags, so you need to include all desired feature flags in the value array. If you already have other feature flags enabled, you should include them along with “HelmSharedStorage” in the value array.

  1. First, install the SMB CSI driver using Helm:

    $helm repo add csi-driver-smb https://raw.githubusercontent.com/kubernetes-csi/csi-driver-smb/master/charts
    $helm install csi-driver-smb csi-driver-smb/csi-driver-smb --namespace kube-system --version v1.16.0

  2. Get the NVCF cluster name:

    $nvcf_cluster_name="$(kubectl get nvcfbackends -n nvca-operator -o name | cut -d'/' -f2)"

  3. Enable the Helm shared storage feature flag:

    $kubectl patch nvcfbackends.nvcf.nvidia.io -n nvca-operator "$nvcf_cluster_name" --type=merge -p '{"spec":{"overrides":{"featureGate":{"values":["LogPosting","HelmSharedStorage","CachingSupport"]}}}}'

  4. Verify that the feature flag is enabled:

    $kubectl get pods -n nvca-system -o yaml | grep HelmSharedStorage
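
You can also confirm the SMB CSI driver pods are running (pod names may vary by chart version):

$kubectl get pods -n kube-system | grep csi-smb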

Agent Config Merging

The NVIDIA Cluster Agent supports merging custom configuration into the generated NVCA config via the agentConfig.mergeConfig Helm value. This allows you to override or extend NVCA runtime settings without modifying the operator’s config generation logic.

When agentConfig.mergeConfig is set, the Helm chart creates a ConfigMap called agent-config-merge containing the provided YAML. This ConfigMap is mounted into the NVCA pod and merged with the generated config at runtime.

Example values.yaml:

```yaml
agentConfig:
  mergeConfig: |
    agent:
      logLevel: debug
```

Apply via Helm:

$helm upgrade nvca-operator -n nvca-operator --create-namespace -i \
> <chart-reference> \
> --set-file agentConfig.mergeConfig=my-nvca-config.yaml

Or include it in a values file passed to helm upgrade -f values.yaml.
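
To confirm the agent-config-merge ConfigMap described above was created (searching cluster-wide, since the namespace it lands in may vary by deployment):

$kubectl get configmap -A | grep agent-config-merge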

Manual Instance Configuration

It is highly recommended to rely on Dynamic GPU Discovery (and therefore the NVIDIA GPU Operator), as manual instance configuration is error-prone.

This type of configuration is only necessary when the cluster cloud provider does not support the NVIDIA GPU Operator.

Manual instance configuration allows you to disable Dynamic GPU Discovery and instead provide a static list of instance types that NVCA will register with the NVCF control plane. This is useful when:

  • You have a known, fixed set of GPU configurations
  • Dynamic GPU discovery isn’t working correctly for your environment
  • You want to control exactly which instance types are available

By default, NVCA uses Dynamic GPU Discovery to automatically detect GPUs on cluster nodes and register appropriate instance types. When this is disabled, NVCA instead reads a static GPU configuration from a ConfigMap.

Prerequisites

  • A working NVCF cluster with nvca-operator installed

  • Access to modify Helm values for nvca-operator

  • Since you are not using the GPU Operator, you must ensure each GPU node has the instance-type label that matches the “value” field in your manual configuration:

    $kubectl label nodes <node-name> nvca.nvcf.nvidia.io/instance-type=<instance-type-value>

    For example, if your configuration specifies "value": "OCI.GPU.A10", you would label the node with:

    $kubectl label nodes gpu-node-1 nvca.nvcf.nvidia.io/instance-type=OCI.GPU.A10

Step 1: Create the GPU Configuration JSON

Create a JSON file defining your GPU types and instance configurations. The configuration is an array of GPU types, each containing an array of instance types.

Example Configuration (gpu-config.json):

```json
[
  {
    "name": "H100",
    "capacity": 8,
    "instanceTypes": [
      {
        "name": "ON-PREM.GPU.H100_1x",
        "value": "ON-PREM.GPU.H100",
        "description": "One Nvidia Hopper GPU",
        "default": true,
        "cpuCores": 16,
        "systemMemory": "128G",
        "gpuMemory": "80G",
        "gpuCount": 1,
        "os": "linux",
        "driverVersion": "535.135.05",
        "cpuArch": "amd64",
        "storage": "1Ti"
      },
      {
        "name": "ON-PREM.GPU.H100_8x",
        "value": "ON-PREM.GPU.H100",
        "description": "Eight Nvidia Hopper GPUs (Full Node)",
        "default": false,
        "cpuCores": 128,
        "systemMemory": "1Ti",
        "gpuMemory": "640G",
        "gpuCount": 8,
        "os": "linux",
        "driverVersion": "535.135.05",
        "cpuArch": "amd64",
        "storage": "8Ti"
      }
    ]
  }
]
```
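
Before encoding, it is worth validating the JSON; jq is one option, assuming it is installed:

$jq . gpu-config.json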

Step 2: Base64 Encode the Configuration

The GPU configuration must be Base64-encoded for the Helm values. Use the following command:

$# Base64 encode the configuration (without line wrapping)
$GPU_CONFIG_B64=$(cat gpu-config.json | base64 -w 0)
$echo $GPU_CONFIG_B64

On macOS, use base64 without the -w 0 flag:

$GPU_CONFIG_B64=$(cat gpu-config.json | base64)

Step 3: Configure Helm Values

Update your nvca-operator Helm values to disable Dynamic GPU Discovery and provide the manual configuration.

Example values.yaml:

```yaml
selfManaged:
  featureGateValues: []
  gpuManualInstanceConfigB64: "<your-base64-encoded-config>"
```

By default selfManaged.featureGateValues is ["DynamicGPUDiscovery"]. Set it to an empty list ([]) to disable dynamic discovery and use your manual configuration instead.

Step 4: Install or Upgrade the Operator

Apply the configuration using Helm:

$helm upgrade nvca-operator -n nvca-operator --create-namespace -i \
> <chart-reference> \
> -f values.yaml

If you are using the NVCF self-hosted Helmfile, add your values file as an entry in the nvca-operator release values: list and run helmfile sync or helmfile apply instead.

Configuration Fields Reference

GPU Type Fields:

| Field | Required | Description |
| --- | --- | --- |
| name | Yes | GPU type name (e.g., "A100", "L40", "H100"). Must match the GPU product name reported by nvidia-smi. |
| capacity | No | Total GPU capacity for this type. Used for resource accounting and quota management. |
| instanceTypes | Yes | Array of instance type configurations for this GPU type. |

Instance Type Fields:

| Field | Required | Description |
| --- | --- | --- |
| name | Yes | Unique instance type identifier (e.g., "ON-PREM.GPU.H100_1x"). This is the name users select when deploying functions. |
| value | Yes | Instance type value used for internal matching (e.g., "ON-PREM.GPU.H100"). Must match the nvca.nvcf.nvidia.io/instance-type node label. |
| description | No | Human-readable description displayed in the UI. |
| default | No | Whether this is the default instance type for this GPU. Only one instance type per GPU should be marked as default. |
| cpuCores | Yes | Number of CPU cores allocated to workloads using this instance type. |
| systemMemory | Yes | System RAM allocation (e.g., "28G", "128G", "1Ti"). Uses Kubernetes quantity format. |
| gpuMemory | Yes | Total GPU memory for this instance type. For multi-GPU instances, this is the total across all GPUs. |
| gpuCount | Yes | Number of GPUs in this instance type. |
| os | No | Operating system (e.g., "linux"). |
| driverVersion | No | NVIDIA driver version (e.g., "535.135.05"). |
| cpuArch | No | CPU architecture (e.g., "amd64", "arm64"). |
| storage | No | Storage allocation per instance (e.g., "512Gi", "1Ti"). Uses Kubernetes quantity format. |

Verification

After applying the configuration, verify that NVCA is using the static configuration:

  1. Check the NVCFBackend resource:

    $kubectl get nvcfbackend -n nvca-operator -o yaml

    Look for -DynamicGPUDiscovery in the feature gates and verify the GPU configuration is present.

  2. Check the nvca-config ConfigMap:

    $kubectl get configmap nvca-config -n nvca-system -o yaml

    The gpus key should contain your JSON configuration.

  3. Check NVCA logs for registration:

    $kubectl logs -n nvca-system -l app=nvca | grep -i "registration\|instance"

Troubleshooting

Configuration Not Applied:

  1. Verify Dynamic GPU Discovery is disabled in the feature gate values
  2. Ensure the Base64 encoding is correct and doesn’t contain line breaks
  3. Check that the JSON is valid before encoding

Invalid JSON Format:

  1. Validate your JSON using a JSON validator before encoding
  2. Ensure all required fields are present
  3. Check that numeric values (cpuCores, gpuCount) are not quoted as strings

Memory/Storage Format Errors:

Memory and storage values must use valid Kubernetes quantity format:

  • Valid: "28G", "128Gi", "1Ti", "512Mi"
  • Invalid: "28GB", "128 Gi", "1TB"

Use G or Gi for gigabytes, T or Ti for terabytes. The i suffix indicates binary units (1024-based).

Cloud Provider-Specific Notes

Oracle Cloud Infrastructure (OCI)

When using Oracle Container Engine for Kubernetes (OKE), ensure that:

  • Your compute nodes and GPU nodes are in the same availability domain
  • This is required for proper network connectivity between the NVIDIA Cluster Agent and GPU nodes
  • Flannel is the currently recommended and validated CNI for OKE cluster networking, rather than the OCI-native CNI.