---
title: NGC Developer Guide
description: >-
  Everything you need to know about NVIDIA GPU Cloud (NGC) - the hub for
  GPU-optimized AI software.
---
NGC is NVIDIA's hub for GPU-optimized AI software: a container registry hosting GPU-accelerated containers, pre-trained models, and AI toolkits tuned for NVIDIA hardware.
Every asset in NGC is tested for security and validated across different GPUs for performance and scalability. This isn't just a repository - it's a curated, enterprise-grade catalog.
## What's in the NGC Catalog?
| Asset Type | What You Get | Example Use Case |
| ---------------------- | ----------------------------- | --------------------------------- |
| **Containers** | GPU-optimized Docker images | PyTorch, TensorFlow, RAPIDS |
| **Pre-trained Models** | Ready-to-deploy AI models | NeMo LLMs, computer vision models |
| **Helm Charts** | Kubernetes deployment configs | Deploy inference services on K8s |
| **Resources** | Datasets, Jupyter notebooks | Training data, example workflows |
| **SDKs** | Domain-specific toolkits | Healthcare, autonomous vehicles |
| **Collections** | Curated bundles | End-to-end AI pipelines |
Browse the catalog at [catalog.ngc.nvidia.com](https://catalog.ngc.nvidia.com/).
## Quick Start
Go to [ngc.nvidia.com](https://ngc.nvidia.com) and sign up with your NVIDIA account (or corporate SSO).
In the NGC web UI:
1. Click your profile → Setup
2. Generate API Key
3. Save it securely - you'll only see it once!
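Since the key is shown only once, store it somewhere protected and load it into the environment from there. A minimal sketch; the file path and key value are illustrative, not NGC requirements:

```shell
# Store the API key in a permission-restricted file, then load it
# into the environment (path and key value are illustrative).
mkdir -p "$HOME/.ngc-secrets"
printf '%s\n' "nvapi-example-key" > "$HOME/.ngc-secrets/api_key"
chmod 600 "$HOME/.ngc-secrets/api_key"
export NGC_API_KEY="$(cat "$HOME/.ngc-secrets/api_key")"
echo "key loaded: ${#NGC_API_KEY} characters"
```

To let Docker pull from nvcr.io, run `docker login nvcr.io` with the literal string `$oauthtoken` as the username and the API key as the password.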
```bash
# Using pip
pip install ngc-cli
# Or download the official release directly
wget -O ngccli_linux.zip https://api.ngc.nvidia.com/v2/resources/nvidia/ngc-apps/ngc_cli/versions/3.31.0/files/ngccli_linux.zip
unzip ngccli_linux.zip
chmod +x ngc-cli/ngc
export PATH=$PATH:$(pwd)/ngc-cli
```
```bash
# Interactive setup
ngc config set
# Or set environment variables
export NGC_API_KEY=<your-api-key>
export NGC_ORG=<your-org> # Optional: for enterprise users
```
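In CI pipelines, it helps to fail fast when the credentials above are missing rather than let a later pull error out. A minimal guard sketch; the key value here is a stand-in for whatever your secret store injects:

```shell
# Fail fast if NGC credentials are missing (sketch; key value is a stand-in,
# normally injected by your CI secret store).
export NGC_API_KEY="nvapi-example-key"
if [ -z "${NGC_API_KEY:-}" ]; then
  echo "NGC_API_KEY is not set" >&2
  exit 1
fi
echo "NGC credentials present"
```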
```bash
# Pull PyTorch container optimized for GPUs
docker pull nvcr.io/nvidia/pytorch:24.01-py3
# Run it
docker run --gpus all -it nvcr.io/nvidia/pytorch:24.01-py3
```
## Core Developer Workflows
### Pulling Containers
```bash
# Format: nvcr.io/<org>[/<team>]/<image>:<tag>
# NVIDIA's public containers
docker pull nvcr.io/nvidia/pytorch:24.01-py3
docker pull nvcr.io/nvidia/tensorflow:24.01-tf2-py3
docker pull nvcr.io/nvidia/tritonserver:24.01-py3
# Partner containers (ISV)
docker pull nvcr.io/isv-ngc-partner/company/container:version
```
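The reference format above splits cleanly with plain bash parameter expansion, which is handy in scripts that need the tag or namespace on its own. A sketch; the variable names are illustrative:

```shell
# Split an nvcr.io image reference into registry, image path, and tag
# using bash parameter expansion (variable names are illustrative).
ref="nvcr.io/nvidia/pytorch:24.01-py3"
tag="${ref##*:}"          # strip everything up to the last ':'
path="${ref%:*}"          # drop the tag
registry="${path%%/*}"    # first path component
image="${path#*/}"        # remainder after the registry
echo "$registry $image $tag"
```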
### Downloading Models
```bash
# List available models
ngc registry model list
# Download a specific model
ngc registry model download-version nvidia/nemo/megatron_gpt_345m:1.0
# Download to specific directory
ngc registry model download-version nvidia/nemo/megatron_gpt_345m:1.0 --dest ./models/
```
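Model targets follow the `<org>/<team>/<model>:<version>` pattern seen above, so download commands can be assembled from variables in automation. A sketch that prints the command rather than running it (values are illustrative):

```shell
# Assemble a model download command from its parts (values illustrative;
# printed rather than executed so the sketch is self-contained).
org="nvidia"; team="nemo"; model="megatron_gpt_345m"; version="1.0"
target="${org}/${team}/${model}:${version}"
echo "ngc registry model download-version ${target} --dest ./models/"
```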
### Using Resources (Datasets, Notebooks)
```bash
# List resources
ngc registry resource list
# Download a resource
ngc registry resource download-version nvidia/tao/tao_getting_started:1.0
```
### Working with Helm Charts
```bash
# Add NGC Helm repo
helm repo add nvidia https://helm.ngc.nvidia.com/nvidia
# Search for charts
helm search repo nvidia
# Install a chart (e.g., GPU Operator)
helm install gpu-operator nvidia/gpu-operator \
--set driver.enabled=true \
--namespace gpu-operator \
--create-namespace
```
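The `--set` flags above can also live in a values file, which is easier to review and version-control. A sketch; only `driver.enabled` comes from the command above, and the file name is an illustrative choice:

```shell
# Write the equivalent of --set driver.enabled=true as a values file;
# pass it to helm with -f (file name is illustrative).
cat > gpu-operator-values.yaml <<'EOF'
driver:
  enabled: true
EOF
cat gpu-operator-values.yaml
```

Then install with `helm install gpu-operator nvidia/gpu-operator -f gpu-operator-values.yaml --namespace gpu-operator --create-namespace`.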
## NGC CLI Power Commands
```bash
# Search the catalog
ngc registry image list --format_type csv | grep pytorch
# Get detailed info about an image
ngc registry image info nvidia/pytorch:24.01-py3
# List models in an org/team namespace
ngc registry model list "nvidia/nemo/*"
# Check your orgs and teams
ngc org list
ngc team list
# Upload to private registry (enterprise)
ngc registry image push my-org/my-team/my-container:v1.0
# Scripting: JSON output for automation
ngc registry image list --format_type json | jq '.[] | .name'
```
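When `jq` isn't installed, plain `grep` can pull fields out of a saved listing. The JSON shape below is an assumption about the list output, written to a local file so the sketch stays offline and self-contained:

```shell
# Filter a saved (simulated) image listing without jq; the field names
# in this JSON are assumptions about the list output.
cat > images.json <<'EOF'
[{"name": "nvidia/pytorch"}, {"name": "nvidia/tensorflow"}, {"name": "nvidia/tritonserver"}]
EOF
grep -o '"name": "[^"]*"' images.json | grep pytorch
# → "name": "nvidia/pytorch"
```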
## Registry Architecture
```mermaid
%%{init: {'themeVariables': {'fontFamily': 'monospace'}}}%%
flowchart LR
    A["nvcr.io<br/>(pull/push)"] --> B["Reverse Proxy<br/>(auth/authz)"]
    B --> C["Registry Server<br/>(Docker API)"]
    D["layers.nvcr.io"] --> E["Container Blobs (US)"]
    F["ngc.download.nvidia.com"] --> G["Container Blobs (Global)"]
    H["authn.nvidia.com"] --> I["Authentication Service"]
```
**Key URLs:**
* **nvcr.io** - Main registry for pulling containers
* **catalog.ngc.nvidia.com** - Web UI for browsing
* **api.ngc.nvidia.com** - REST API endpoint
## Deployment Options
| Environment | How to Use NGC |
| -------------- | ----------------------------------------------------------- |
| **Local/DGX** | Direct `docker pull nvcr.io/...` |
| **AWS** | NGC containers on EC2 GPU instances, SageMaker integration. |
| **GCP** | NGC containers on Compute Engine with GPUs. |
| **Azure** | NGC containers on Azure VMs with NVIDIA GPUs. |
| **Kubernetes** | NGC Helm charts, GPU Operator. |
| **Edge** | NGC containers optimized for Jetson. |
## Private Registry (Enterprise)
For enterprise users, NGC offers private registries to host your own containers and models:
```bash
# Push to private registry
docker tag my-container:latest nvcr.io/my-org/my-team/my-container:v1.0
docker push nvcr.io/my-org/my-team/my-container:v1.0
# Manage access
ngc org user add --org my-org --email colleague@company.com --role REGISTRY_USER
```
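Retagging and pushing a batch of images into the private registry is just string assembly over the `nvcr.io/<org>/<team>/<image>` layout shown above. This sketch prints the commands instead of running them; the org, team, and image names are illustrative:

```shell
# Print docker tag/push pairs for a batch of local images
# (org/team/image names are illustrative; nothing is executed).
org="my-org"; team="my-team"
for img in model-server:v1.0 preprocess:v2.1; do
  echo "docker tag ${img} nvcr.io/${org}/${team}/${img}"
  echo "docker push nvcr.io/${org}/${team}/${img}"
done
```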
**Roles:**
* `REGISTRY_READ` - Read-only (pull) access
* `REGISTRY_USER` - Can pull and push
* `ADMIN` - Full management access
## Integration with NVIDIA Tools
| Tool | NGC Integration |
| --------------------------- | ----------------------------------------- |
| **NVIDIA AI Workbench** | Pull models/containers directly from NGC. |
| **NeMo** | Pre-trained LLMs hosted on NGC. |
| **Triton Inference Server** | Container and model hosting. |
| **TAO Toolkit** | Pre-trained vision models. |
| **RAPIDS** | GPU-accelerated data science containers. |
| **DeepStream** | Video analytics containers. |
## Best Practices
* **Pin versions.** NVIDIA updates containers monthly. Use specific tags (`:24.01-py3`) in production, not `:latest`.
* **Check security reports.** All NGC containers pass security scanning; check the "Security" tab in the catalog for CVE reports.
* **Mirror for air-gapped deployments.** Download containers and models to a local registry.
* **Automate with the CLI.** Use NGC CLI in pipelines:
```bash
ngc registry image pull nvidia/pytorch:24.01-py3
```
* **Scale out.** NGC containers are optimized for multi-GPU setups out of the box.
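The version-pinning rule is easy to enforce mechanically in CI. A guard sketch; the image list here is illustrative, and in practice you would feed it from your manifests:

```shell
# CI guard: flag image references that are un-pinned (no tag, or :latest).
# The image list is illustrative.
images="nvcr.io/nvidia/pytorch:24.01-py3 nvcr.io/nvidia/tritonserver:latest"
unpinned=0
for ref in $images; do
  tag="${ref##*:}"
  if [ "$tag" = "latest" ] || [ "$tag" = "$ref" ]; then
    echo "UNPINNED: $ref"
    unpinned=$((unpinned + 1))
  else
    echo "pinned:   $ref"
  fi
done
echo "$unpinned unpinned image(s)"
```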
## Essential Links
| Resource | URL |
| ----------------------- | ------------------------------------------------------------------------------------------------- |
| NGC Catalog | [catalog.ngc.nvidia.com](https://catalog.ngc.nvidia.com/) |
| NGC Documentation | [docs.nvidia.com/ngc](https://docs.nvidia.com/ngc/) |
| NGC CLI Docs | [docs.ngc.nvidia.com/cli](https://docs.ngc.nvidia.com/cli/) |
| NGC User Guide | [NGC User Guide](https://docs.nvidia.com/ngc/latest/ngc-user-guide.html) |
| Public Cloud Deployment | [NGC Deploy Guide](https://docs.nvidia.com/ngc/ngc-deploy-public-cloud/) |
| Private Registry Guide | [Private Registry Guide](https://docs.nvidia.com/ngc/latest/ngc-private-registry-user-guide.html) |
## Summary
**NGC = GPU-optimized container registry + model hub + Helm charts**
```bash
# Get started in 3 commands:
pip install ngc-cli
ngc config set # Enter your API key
docker pull nvcr.io/nvidia/pytorch:24.01-py3
```
You're ready to start building.