Quickstart: One-Click Installation
Use the one-click CLI flow for a fresh self-hosted NVCF install. The command installs the control plane, registers a GPU cluster, installs the NVIDIA Cluster Agent, and performs basic health checks.
Use this path when you want the fastest route to a working deployment. Use the Helmfile installation path when you need manual release control, partial recovery, or upgrade operations.
What the CLI installs
nvcf-cli self-hosted up installs the self-managed stack in ordered phases:
- Checks local tools and Kubernetes access.
- Resolves the self-managed stack bundle.
- Installs the control plane.
- Initializes CLI authentication.
- Registers the GPU cluster with SIS.
- Installs the compute-plane components, including NVCA.
- Prints a final health summary.
The control plane and GPU cluster can be the same Kubernetes cluster or separate clusters. For a single cluster, use your current kubeconfig context. For separate clusters, pass both kube contexts explicitly.
Prerequisites
Before you run the quickstart, prepare:
- nvcf-cli, kubectl, helm, helmfile 1.1.x, and the helm-diff plugin
- Access to the NVCF Helm charts and container images
- A Kubernetes cluster for the control plane
- A GPU cluster with the NVIDIA GPU Operator, or a local k3d cluster with the fake GPU operator
- A default StorageClass
- For remote clusters, Gateway API ingress prepared before install
- For remote clusters, a CLI config that points to the Gateway load balancer
For artifact mirroring, refer to Image Mirroring. For local k3d setup, refer to Local Development.
Choose a cluster layout
The two context flags must be set together. Do not set them to the same value. For a single cluster, omit both flags.
Prepare remote Gateway and CLI config
Skip this section for local k3d. For remote clusters such as Amazon EKS, complete Gateway quickstart and Configure the CLI for one-click before running self-hosted up.
Run the install commands from the directory that contains .nvcf-cli.yaml, or pass --config .nvcf-cli.yaml to each CLI command.
Run a fresh install
Check prerequisites first:
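A sketch of the prerequisite check, assuming the self-hosted check subcommand referenced in the Troubleshooting section also validates local tools and cluster access:

```shell
# Verify local tools, Kubernetes access, and cluster readiness
# before installing anything.
nvcf-cli self-hosted check --all
```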
Run the one-click install:
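For a single cluster, the install uses your current kubeconfig context, so the base invocation is simply the subcommand named at the top of this page:

```shell
# Installs the control plane, registers the GPU cluster,
# and installs the compute-plane components in one flow.
nvcf-cli self-hosted up
```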
For separate control-plane and GPU clusters:
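A sketch for the two-cluster layout. The flag names below are hypothetical placeholders for the two context flags this page describes; confirm the exact spelling with nvcf-cli self-hosted up --help:

```shell
# --control-plane-context and --gpu-context are hypothetical names
# for the two kube-context flags; both must be set together and
# must not point at the same context.
nvcf-cli self-hosted up \
  --control-plane-context <control-plane-kube-context> \
  --gpu-context <gpu-cluster-kube-context>
```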
Use --stack=/path/to/nvcf-self-managed-stack when testing from a local source-built CLI or when you need to point at a specific stack checkout. Packaged CLI releases can use the packaged stack source unless your release notes say otherwise.
Local k3d quickstart
After you create the k3d cluster, install the fake GPU operator, and install the CSI SMB driver as described in Local Development, use the local route hostnames:
Set the local endpoint environment:
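A sketch of the endpoint setup, using the sis.localhost route hostname mentioned in Troubleshooting. The variable name is a hypothetical placeholder; use the names from your Local Development guide:

```shell
# NVCF_SIS_ENDPOINT is a hypothetical variable name for illustration.
# sis.localhost must resolve from the machine running nvcf-cli.
export NVCF_SIS_ENDPOINT=https://sis.localhost
```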
Then run:
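As a sketch, the same one-click command, passing the CLI config explicitly if you are not running from the directory that contains it:

```shell
# --config is optional when run from the directory
# that contains .nvcf-cli.yaml.
nvcf-cli self-hosted up --config .nvcf-cli.yaml
```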
If you are testing a local stack checkout, add:
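Using the flag shown earlier on this page for local source-built CLI runs:

```shell
# Point the install at a specific stack checkout.
nvcf-cli self-hosted up --stack=/path/to/nvcf-self-managed-stack
```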
Verify the install
Run the CLI checks:
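Using the health-check command named in the Troubleshooting section:

```shell
# Health summary for the installed control-plane and
# compute-plane components.
nvcf-cli self-hosted check --all
```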
Confirm Kubernetes resources:
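A minimal sketch with plain kubectl; the exact namespace names depend on your install, so start by listing everything:

```shell
# All pods across namespaces should be Running or Completed.
kubectl get pods --all-namespaces

# Confirm the default StorageClass the install relies on.
kubectl get storageclass
```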
Then create, deploy, and invoke a function using the CLI. For local fake GPU clusters, choose a GPU and instance type that match the discovered node labels and GPU count. For the validated k3d H100 fake GPU setup, use:
Clean up
To remove the compute-plane components:
To remove the control plane:
For manual Helmfile recovery and teardown, refer to Helmfile Installation.
Troubleshooting
If the quickstart fails, start with Troubleshooting.
Common local k3d issues:
- sis.localhost must resolve from the machine running nvcf-cli.
- self-hosted check --all is a health check, not a replacement for function deploy and invoke validation.
- Local source-built CLI runs can need --stack until the packaged default stack source is available.