NGC Developer Guide

NGC is NVIDIA’s hub for GPU-optimized AI software: a registry of GPU-accelerated containers, pre-trained models, and AI toolkits tuned for NVIDIA hardware.

Every asset in NGC is tested for security and validated across different GPUs for performance and scalability. This isn’t just a repository - it’s a curated, enterprise-grade catalog.

What’s in the NGC Catalog?

| Asset Type | What You Get | Example Use Case |
| --- | --- | --- |
| Containers | GPU-optimized Docker images | PyTorch, TensorFlow, RAPIDS |
| Pre-trained Models | Ready-to-deploy AI models | NeMo LLMs, computer vision models |
| Helm Charts | Kubernetes deployment configs | Deploy inference services on K8s |
| Resources | Datasets, Jupyter notebooks | Training data, example workflows |
| SDKs | Domain-specific toolkits | Healthcare, autonomous vehicles |
| Collections | Curated bundles | End-to-end AI pipelines |

Browse the catalog at catalog.ngc.nvidia.com.

Quick Start

Step 1: Create an NGC Account

Go to ngc.nvidia.com and sign up with your NVIDIA account (or corporate SSO).

Step 2: Generate an API Key

In the NGC web UI:

  1. Click your profile → Setup
  2. Generate API Key
  3. Save it securely - you’ll only see it once!
Step 3: Install NGC CLI

```shell
# Using pip (recommended)
pip install ngc-cli

# Or download the CLI bundle directly
wget -O ngccli_linux.zip https://api.ngc.nvidia.com/v2/resources/nvidia/ngc-apps/ngc_cli/versions/3.31.0/files/ngccli_linux.zip
unzip ngccli_linux.zip
chmod +x ngc-cli/ngc
export PATH=$PATH:$(pwd)/ngc-cli
```
Step 4: Configure Authentication

```shell
# Interactive setup
ngc config set

# Or set environment variables
export NGC_API_KEY=<your-api-key>
export NGC_ORG=<your-org>  # Optional: for enterprise users
```
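The same API key also lets Docker itself authenticate against nvcr.io, which you need for private images. NGC's registry login uses the literal username `$oauthtoken` with the API key as the password. A minimal sketch (the command is built and printed rather than executed, so you can paste it into a pipeline):

```shell
# Build the docker login command for nvcr.io (printed, not executed here).
# NGC uses the literal username '$oauthtoken'; your API key is the password.
login_cmd='docker login nvcr.io --username $oauthtoken --password-stdin'
echo "Run: echo \"\$NGC_API_KEY\" | ${login_cmd}"
```

Piping the key via `--password-stdin` keeps it out of your shell history and process list.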
Step 5: Pull Your First Container

```shell
# Pull the PyTorch container optimized for GPUs
docker pull nvcr.io/nvidia/pytorch:24.01-py3

# Run it
docker run --gpus all -it nvcr.io/nvidia/pytorch:24.01-py3
```

Core Developer Workflows

Pulling Containers

```shell
# Format: nvcr.io/<organization>/<repository>:<tag>

# NVIDIA's public containers
docker pull nvcr.io/nvidia/pytorch:24.01-py3
docker pull nvcr.io/nvidia/tensorflow:24.01-tf2-py3
docker pull nvcr.io/nvidia/tritonserver:24.01-py3

# Partner containers (ISV)
docker pull nvcr.io/isv-ngc-partner/company/container:version
```
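The `<registry>/<organization>/<repository>:<tag>` format above is easy to take apart in scripts, for example to route images through a mirror or validate tags in CI. A minimal sketch using plain parameter expansion:

```shell
# Split an NGC image reference into its parts.
# Format: nvcr.io/<organization>/<repository>:<tag>
ref="nvcr.io/nvidia/pytorch:24.01-py3"
registry="${ref%%/*}"   # nvcr.io
tag="${ref##*:}"        # 24.01-py3
path="${ref#*/}"        # nvidia/pytorch:24.01-py3
path="${path%:*}"       # nvidia/pytorch
org="${path%%/*}"       # nvidia
repo="${path#*/}"       # pytorch
echo "$registry $org $repo $tag"
```

No external tools needed, so it works in minimal CI images.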

Downloading Models

```shell
# List available models
ngc registry model list

# Download a specific model
ngc registry model download-version nvidia/nemo/megatron_gpt_345m:1.0

# Download to a specific directory
ngc registry model download-version nvidia/nemo/megatron_gpt_345m:1.0 --dest ./models/
```
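When you need several models, it helps to generate the download commands from a list. A dry-run sketch (the helper name `build_download_cmds` is ours, and the commands are printed rather than executed, so you can review them before running):

```shell
# Print one ngc download command per model path (dry run).
build_download_cmds() {
  local dest="$1"
  shift
  for model in "$@"; do
    echo "ngc registry model download-version ${model} --dest ${dest}"
  done
}

build_download_cmds ./models nvidia/nemo/megatron_gpt_345m:1.0
```

Pipe the output to `sh` once you are happy with it.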

Using Resources (Datasets, Notebooks)

```shell
# List resources
ngc registry resource list

# Download a resource
ngc registry resource download-version nvidia/tao/tao_getting_started:1.0
```

Working with Helm Charts

```shell
# Add the NGC Helm repo
helm repo add nvidia https://helm.ngc.nvidia.com/nvidia

# Search for charts
helm search repo nvidia

# Install a chart (e.g., GPU Operator)
helm install gpu-operator nvidia/gpu-operator \
  --set driver.enabled=true \
  --namespace gpu-operator \
  --create-namespace
```

NGC CLI Power Commands

```shell
# Search the catalog
ngc registry image list --format_type csv | grep pytorch

# Get detailed info about an image
ngc registry image info nvidia/pytorch:24.01-py3

# List models under an org/team
ngc registry model list "nvidia/nemo/*"

# Check your orgs and teams
ngc org list
ngc team list

# Upload to a private registry (enterprise)
ngc registry image push my-org/my-team/my-container:v1.0

# Scripting: JSON output for automation
ngc registry image list --format_type json | jq '.[] | .name'
```
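If `jq` is not available in your automation environment, the JSON output can be scraped with `grep`/`sed` instead. A sketch on sample input (the JSON shape here, including the `latestTag` field, is an assumption for illustration; real input would come from `ngc registry image list --format_type json`):

```shell
# Extract image names from NGC CLI-style JSON output without jq.
json='[{"name":"pytorch","latestTag":"24.01-py3"},{"name":"tensorflow","latestTag":"24.01-tf2-py3"}]'
names=$(printf '%s' "$json" | grep -o '"name":"[^"]*"' | sed 's/"name":"\(.*\)"/\1/')
echo "$names"
```

This is fragile compared to `jq` (it assumes no escaped quotes inside values), so prefer `jq` where you can install it.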

Registry Architecture

Key URLs:

  • nvcr.io - Main registry for pulling containers
  • catalog.ngc.nvidia.com - Web UI for browsing
  • api.ngc.nvidia.com - REST API endpoint

Deployment Options

| Environment | How to Use NGC |
| --- | --- |
| Local/DGX | Direct `docker pull nvcr.io/...` |
| AWS | NGC containers on EC2 GPU instances, SageMaker integration |
| GCP | NGC containers on Compute Engine with GPUs |
| Azure | NGC containers on Azure VMs with NVIDIA GPUs |
| Kubernetes | NGC Helm charts, GPU Operator |
| Edge | NGC containers optimized for Jetson |

Private Registry (Enterprise)

For enterprise users, NGC offers private registries to host your own containers and models:

```shell
# Push to your private registry
docker tag my-container:latest nvcr.io/my-org/my-team/my-container:v1.0
docker push nvcr.io/my-org/my-team/my-container:v1.0

# Manage access
ngc org user add --org my-org --email colleague@company.com --role REGISTRY_USER
```

Roles:

  • REGISTRY_USER - Can push to and pull from the private registry
  • REGISTRY_READ - Read-only (pull) access
  • ADMIN - Full management access

Integration with NVIDIA Tools

| Tool | NGC Integration |
| --- | --- |
| NVIDIA AI Workbench | Pull models/containers directly from NGC |
| NeMo | Pre-trained LLMs hosted on NGC |
| Triton Inference Server | Container and model hosting |
| TAO Toolkit | Pre-trained vision models |
| RAPIDS | GPU-accelerated data science containers |
| DeepStream | Video analytics containers |

Best Practices

  • Pin versions: NVIDIA updates containers monthly. Use specific tags (:24.01-py3) in production, never :latest.
  • Check security reports: all NGC containers pass security scanning; see the “Security” tab in the catalog for CVE reports.
  • Plan for air-gapped deployments: mirror containers and models into a local registry.
  • Automate with the NGC CLI in pipelines:

```shell
ngc registry image pull nvidia/pytorch:24.01-py3
```

  • NGC containers are optimized for multi-GPU setups out of the box.
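The air-gapped tip above can be sketched as a dry-run mirror script: it prints the `docker` commands to copy an NGC image into a local registry instead of running them, so it is safe to review first. The helper name `mirror_image` and the registry `registry.internal:5000` are placeholders, not NGC conventions:

```shell
# Print the docker commands to mirror an NGC image into a local registry.
mirror_image() {
  local src="$1"
  local local_registry="$2"
  local name="${src#nvcr.io/}"   # strip the nvcr.io prefix, keep org/repo:tag
  echo "docker pull ${src}"
  echo "docker tag ${src} ${local_registry}/${name}"
  echo "docker push ${local_registry}/${name}"
}

mirror_image nvcr.io/nvidia/pytorch:24.01-py3 registry.internal:5000
```

Run the printed commands on a connected host, then transfer or serve the local registry inside the air gap.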

| Resource | URL |
| --- | --- |
| NGC Catalog | catalog.ngc.nvidia.com |
| NGC Documentation | docs.nvidia.com/ngc |
| NGC CLI Docs | docs.ngc.nvidia.com/cli |
| NGC User Guide | NGC User Guide |
| Public Cloud Deployment | NGC Deploy Guide |
| Private Registry Guide | Private Registry Guide |

Summary

NGC = GPU-optimized container registry + model hub + Helm charts

```shell
# Get started in 3 commands:
pip install ngc-cli
ngc config set   # Enter your API key
docker pull nvcr.io/nvidia/pytorch:24.01-py3
```

You’re ready to start building.