NVIDIA LAUNCHPAD DOCUMENTATION

Instantly experience end-to-end workflows for AI, data science, 3D design collaboration, and more.

Learn More

To get started, select an experience to view the available documentation.

AI

Train and Deploy an AI Support Chatbot

Train and deploy a Natural Language Processing AI application with Triton Inference Server.

Browse

Train an AI Model for Image Classification of Online Products (VMware vSphere)

Run through an end-to-end data science workflow that uses TensorFlow to train an image classification model and then uses NVIDIA Triton Inference Server to deploy the trained model to production for inference.

Browse

Train an AI Model for Image Classification of Online Products (Red Hat OpenShift)

Run through an end-to-end data science workflow that uses TensorFlow to train an image classification model and then uses NVIDIA Triton Inference Server to deploy the trained model to production for inference.

Browse
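Both image classification labs above end with the trained model served by Triton Inference Server, which exposes an HTTP/REST endpoint following the KServe v2 inference protocol. As a minimal sketch, a client request can be built with the standard library alone; the model name (`image_classifier`), input tensor name (`input_1`), and shape are hypothetical stand-ins for whatever the deployed model's configuration declares:

```python
# Sketch of an inference request against Triton's HTTP/REST endpoint
# (KServe v2 protocol). Model name, input name, and shape below are
# hypothetical; a real deployment defines its own in the model config.
import json
from urllib import request

payload = {
    "inputs": [
        {
            "name": "input_1",           # hypothetical input tensor name
            "shape": [1, 224, 224, 3],   # one RGB image, NHWC layout
            "datatype": "FP32",
            "data": [0.0] * (224 * 224 * 3),  # placeholder pixel values
        }
    ]
}

body = json.dumps(payload).encode("utf-8")
req = request.Request(
    "http://localhost:8000/v2/models/image_classifier/infer",
    data=body,
    headers={"Content-Type": "application/json"},
)
# With a Triton server running, sending the request returns a JSON body
# whose "outputs" list carries the model's class scores:
# resp = json.load(request.urlopen(req))
```

Triton's default HTTP port is 8000; the same request shape works for any model it serves, which is what makes the deployment step reusable across labs.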

Train a BERT Model for Natural Language Processing (NLP) Applications

In this lab, accomplish NLP tasks, such as question answering (QA) and language inference, with NVIDIA Base Command™. Leverage multiple NVIDIA DGX™ A100 640GB systems to pretrain your own highly accurate BERT model on publicly available plain text.

Browse

Train a Large-Scale NLP Model with NeMo Megatron

In this lab, train large transformer-based language models on multi-GPU, multi-node NVIDIA DGX™ systems. You’ll find that all components of the stack, from silicon to network to software, are optimized and GPU-accelerated, promising you the fastest time to train. Bootstrap your enterprise's large language model (LLM) journey using pretuned hyperparameter configurations for GPT-3 models.

Browse

Deploy Vision AI at the Edge of the Network

Deploy a computer vision application with Fleet Command™, a turnkey cloud service that makes it easy to provision and monitor AI applications on real systems. In this lab, IT can provision new hardware remotely, deploy a vision AI sample application, and experience features designed specifically for edge management.

Browse

Interact with Real-Time Speech AI APIs

Interactively experience world-class, real-time, out-of-the-box speech AI APIs for automatic speech recognition and text-to-speech.

Browse

Add Speech Capabilities to a Conversational AI Application

Test your conversational AI application's accuracy and performance improvements by adding Riva speech AI skills.

Browse

DATA SCIENCE

Accelerate Data Processing and Train an AI Model to Predict Prices

Use NVIDIA RAPIDS with the popular New York City taxi dataset to train an XGBoost model that predicts ride fares.

Browse

Accelerate Data Processing, Tokenization, and Train an AI Model to Perform Sentiment Analysis

Run through an end-to-end data science workflow that uses NVIDIA RAPIDS for data processing and tokenization, PyTorch with Hugging Face for training a sentiment analysis model, and NVIDIA Triton Inference Server to deploy the trained model to production for inference.

Browse

Scale Data Science with Domino Enterprise MLOps Platform

Build and scale data science projects using Domino Workspaces.

Browse

Accelerate Apache Spark with Zero Code Changes

Speed up Spark 3 operations by leveraging GPU computing with the NVIDIA RAPIDS Accelerator for Apache Spark.

Browse
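The "zero code changes" in the lab above come from enabling the RAPIDS Accelerator plugin in the Spark configuration rather than editing application code. A minimal spark-defaults.conf fragment might look like this (the RAPIDS Accelerator jar must also be on the classpath, and exact properties vary by deployment):

```
# Load the RAPIDS Accelerator SQL plugin (jar must be on the classpath)
spark.plugins                      com.nvidia.spark.SQLPlugin
# Enable GPU acceleration of SQL/DataFrame operations
spark.rapids.sql.enabled           true
# Advertise one GPU per executor to the Spark scheduler
spark.executor.resource.gpu.amount 1
spark.task.resource.gpu.amount     1
```

With the plugin active, supported SQL and DataFrame operations are transparently executed on the GPU, and unsupported ones fall back to the CPU.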

3D DESIGN COLLABORATION AND SIMULATION

Real-Time 3D Design Collaboration in a Physically Accurate Omniverse Scene

In this experience, design practitioners will learn how to connect third-party applications to the Omniverse platform via tutorial videos, then get hands-on experience in a full-fidelity, physically accurate architectural scene in Omniverse. Designers can adjust foliage, props and decor, materials, cameras, and render settings, and experiment with extensions such as sun studies. Multiple design practitioners can connect to a shared Omniverse scene and live-edit it via Omniverse Nucleus to experience multi-user design collaboration.

Browse

INFRASTRUCTURE OPTIMIZATION EXPERIENCES

Configure and Optimize VMware vSphere for AI and Data Science Workloads

Gain hands-on experience creating VM templates for deploying NVIDIA AI VMs on VMware vSphere.

Browse

Configure, Optimize, and Orchestrate Resources for AI and Data Science Workloads with VMware Tanzu

Deploy on-demand, GPU-accelerated Kubernetes clusters running on virtual machines that AI practitioners, data scientists, and AI researchers can quickly access.

Browse

Configure, Optimize, and Orchestrate Resources for AI and Data Science Workloads with Red Hat OpenShift

Deploy on-demand, GPU-accelerated Kubernetes clusters running on virtual machines that AI practitioners, data scientists, and AI researchers can quickly access.

Browse
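In both orchestration labs above, the clusters expose GPUs to workloads through the `nvidia.com/gpu` extended resource that NVIDIA's device plugin registers with Kubernetes. As a sketch, a pod that requests one GPU looks like this (the pod name and image tag are illustrative assumptions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: gpu-test          # hypothetical pod name
spec:
  restartPolicy: Never
  containers:
  - name: cuda-sample
    # Sample CUDA workload image; the exact tag depends on your environment.
    image: nvcr.io/nvidia/k8s/cuda-sample:vectoradd-cuda11.7.1
    resources:
      limits:
        nvidia.com/gpu: 1   # request one GPU from the scheduler
```

The scheduler places the pod only on a node with a free GPU, which is how practitioners "quickly access" accelerated capacity without touching the underlying VMs.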

DEVELOPER LABS

NVIDIA Cloud Native Core

NVIDIA Cloud Native Core is a collection of software for running cloud-native workloads on NVIDIA GPUs. It is based on Ubuntu, Kubernetes, Helm, and the NVIDIA GPU Operator and Network Operator.

Browse
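Since Helm is part of the stack, the operators are typically installed as Helm charts. As a sketch of that flow for the GPU Operator (release name and namespace are conventional choices, not requirements):

```shell
# Add NVIDIA's public Helm repository and refresh the chart index
helm repo add nvidia https://helm.ngc.nvidia.com/nvidia
helm repo update

# Install the GPU Operator into its own namespace and wait for it to come up
helm install gpu-operator nvidia/gpu-operator \
    --namespace gpu-operator --create-namespace --wait
```

The operator then deploys and manages the driver, container toolkit, and device plugin on the cluster's GPU nodes.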