Install via Container
dpsctl is published as a container image at nvcr.io/nvidia/dpsctl:<version> (the examples below use 0.8.0). This is the quickest way to get started — no binary download or platform selection required.
Prerequisites
- Linux or macOS operating system
- amd64 or arm64 architecture
- Docker (or a compatible OCI runtime such as Podman)
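A quick sanity check that the host matches this support matrix (note that `uname -m` reports `x86_64` for amd64 and `arm64` or `aarch64` for arm64):

```shell
# Print the OS/arch pair; the image is published for
# Linux and macOS on amd64 and arm64.
OS=$(uname -s)     # Linux or Darwin (macOS)
ARCH=$(uname -m)   # x86_64, arm64, or aarch64
echo "${OS}/${ARCH}"
```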
One-shot docker run
Run any dpsctl subcommand directly:
docker run --rm -it --network host \
nvcr.io/nvidia/dpsctl:0.8.0 \
login --host api.dps --port 443 --insecure-tls-skip-verify

Notes:
- `--network host` lets the container reach `localhost` and any cluster DNS names that resolve on your host.
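When output is piped or redirected, drop `-t` (keep `-i`) so terminal control sequences don't end up in the captured stream. A minimal sketch that assembles the non-interactive command as a string and prints it for inspection rather than executing it (the `resource-group list` subcommand is just an illustration):

```shell
# Non-interactive variant: -i keeps stdin attached; omitting -t keeps
# terminal control sequences out of redirected output.
CMD="docker run --rm -i --network host nvcr.io/nvidia/dpsctl:0.8.0 resource-group list"
# Echo for inspection; drop the echo to actually run it.
echo "${CMD}"
```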
Install as a dpsctl Shim
For a smoother experience, install a small wrapper script as ~/.local/bin/dpsctl so every invocation transparently runs the container with sensible defaults (HOME mount, working-directory mount, TTY detection, env-var pass-through):
mkdir -p ~/.local/bin
cat > ~/.local/bin/dpsctl <<'EOF'
#!/bin/bash
# dpsctl wrapper — proxies all calls to the Docker image.
# Override at runtime, e.g.:
# DPSCTL_VERSION=0.8.1 dpsctl ...
# DPSCTL_IMAGE=nvcr.io/nvidia/dpsctl DPSCTL_VERSION=0.8.0 dpsctl ...
IMAGE="${DPSCTL_IMAGE:-nvcr.io/nvidia/dpsctl}"
VERSION="${DPSCTL_VERSION:-0.8.0}"
mkdir -p "${HOME}/.dpsctl"
# Resolve real path so symlinks inside PWD work inside the container
WORKSPACE=$(realpath "${PWD}" 2>/dev/null || echo "${PWD}")
# Build mounts: always include HOME and WORKSPACE at their real paths;
# add /Users only when it exists (macOS native or Lima VM with macOS mount)
MOUNTS=(-v "${HOME}:${HOME}" -v "${WORKSPACE}:${WORKSPACE}")
[ -d "/Users" ] && MOUNTS+=(-v "/Users:/Users:ro")
# Always keep stdin attached; only allocate a TTY when running interactively,
# so redirected/captured output is not corrupted by terminal control sequences.
TTY_FLAGS=(-i)
if [ -t 0 ] && [ -t 1 ]; then
TTY_FLAGS+=(-t)
fi
exec docker run --rm \
--network host \
"${TTY_FLAGS[@]}" \
--user "$(id -u):$(id -g)" \
-e HOME="${HOME}" \
"${MOUNTS[@]}" \
-w "${WORKSPACE}" \
-e DPSCTL_USERNAME \
-e DPSCTL_PASSWORD \
-e DPSCTL_HOST \
-e DPSCTL_PORT \
-e DPSCTL_INSECURE_TLS_SKIP_VERIFY \
-e DPSCTL_INSECURE \
-e DPSCTL_GRPC_TIMEOUT \
-e DPSCTL_OUTPUT \
"${IMAGE}:${VERSION}" \
"$@"
EOF
chmod +x ~/.local/bin/dpsctl

Then make sure ~/.local/bin is on your PATH:
export PATH="${HOME}/.local/bin:${PATH}"

What the wrapper does:
- Mounts `$HOME` so credentials at `~/.dpsctl/credentials.yaml` persist across invocations.
- Mounts the current working directory at the same path inside the container so commands like `dpsctl topology import ./datacenter.json` just work.
- Uses `--network host` so `localhost` and cluster-internal DNS names resolve from inside the container.
- Passes through the standard `DPSCTL_*` environment variables (host, port, credentials, output format, etc.).
- Allocates a TTY only when stdin/stdout are interactive, so piped output stays clean.
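The last point can be observed without Docker: the wrapper's `-t` checks fail as soon as either end of a pipeline is not a terminal. A small stand-alone sketch (the `is_interactive` helper is illustrative, not part of the wrapper):

```shell
# Same test the wrapper uses: allocate a TTY only when both
# stdin (fd 0) and stdout (fd 1) are terminals.
is_interactive() {
  if [ -t 0 ] && [ -t 1 ]; then echo "tty"; else echo "no-tty"; fi
}
# Piping through cat turns fd 1 into a pipe, so the check fails:
is_interactive | cat   # prints "no-tty"
```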
Pinning a Different Version
DPSCTL_IMAGE and DPSCTL_VERSION can be overridden independently — the wrapper joins them as ${IMAGE}:${VERSION}:
# Use a different image tag for one invocation
DPSCTL_VERSION=0.8.1 dpsctl --version
# Override both repo and tag (e.g. internal mirror)
DPSCTL_IMAGE=nvcr.io/nvidia/dpsctl DPSCTL_VERSION=0.8.1 dpsctl --version

Examples Using the Wrapper
Once the shim is on your PATH, every command behaves like a native binary:
dpsctl login --host api.dps --port 443 --insecure-tls-skip-verify
dpsctl topology import ./datacenter.json
dpsctl resource-group list
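If a native dpsctl binary already exists elsewhere on your PATH, the shim only wins when `~/.local/bin` comes first. One way to verify precedence in isolation is with a throwaway stand-in script (all paths here are illustrative):

```shell
# Create a stand-in "shim" in a temp dir and confirm that prepending
# its directory to PATH makes it the resolved dpsctl.
DEMO=$(mktemp -d)
printf '#!/bin/sh\necho shim\n' > "${DEMO}/dpsctl"
chmod +x "${DEMO}/dpsctl"
PATH="${DEMO}:${PATH}" command -v dpsctl   # resolves inside $DEMO
PATH="${DEMO}:${PATH}" dpsctl              # prints "shim"
```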