Instantly experience end-to-end workflows for AI, data science, 3D design collaboration, and more.
To get started, select an experience to view the available documentation.
Train and deploy a Natural Language Processing AI application with Triton Inference Server.
Run through an end-to-end data science workflow that uses TensorFlow to train an image classification model and then uses NVIDIA’s Triton Inference Server to deploy the model to production for inference.
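The deployment step in this workflow centers on Triton’s model repository: each model gets its own directory containing a numbered version folder (e.g. `1/model.savedmodel/`) and a `config.pbtxt` declaring the backend and tensor shapes. A minimal sketch for a hypothetical TensorFlow SavedModel classifier (the model name, tensor names, and dimensions below are illustrative, not taken from the lab):

```
# model_repository/image_classifier/config.pbtxt  (hypothetical layout)
name: "image_classifier"
platform: "tensorflow_savedmodel"
max_batch_size: 8
input [
  {
    name: "input_1"        # e.g. a 224x224 RGB image, NHWC
    data_type: TYPE_FP32
    dims: [ 224, 224, 3 ]
  }
]
output [
  {
    name: "predictions"    # per-class scores
    data_type: TYPE_FP32
    dims: [ 1000 ]
  }
]
```

Triton is then started against the repository with `tritonserver --model-repository=/path/to/model_repository`.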
In this lab, accomplish NLP tasks, such as question answering (QA) and language inference, with NVIDIA Base Command™. Leverage multiple NVIDIA DGX™ A100 640GB systems to pretrain your own highly accurate BERT model on publicly available plain text.
In this lab, train large transformer-based language models on multi-GPU, multi-node NVIDIA DGX™ systems. You’ll find that all components of the stack, from silicon to network to software, are optimized and GPU-accelerated, promising you the fastest time to train. Bootstrap your enterprise's large language model (LLM) journey using pretuned hyperparameter configurations for GPT-3 models.
Deploy a computer vision application with Fleet Command™, a turnkey cloud service that makes it easy to provision and monitor AI applications on real systems. In this lab, IT can provision new hardware remotely, deploy a vision AI sample application, and experience features designed specifically for edge management.
Interactively experience world-class, real-time, out-of-the-box speech AI APIs for automatic speech recognition and text-to-speech.
Test your conversational AI application's accuracy and performance improvements by adding Riva speech AI skills.
Use NVIDIA RAPIDS and the popular New York City taxi dataset to train an XGBoost model that predicts ride fares.
Run through an end-to-end data science workflow that uses NVIDIA RAPIDS and Hugging Face with PyTorch to train a sentiment analysis model, then uses NVIDIA’s Triton Inference Server to deploy the model to production for inference.
Build and scale data science projects using Domino Workspaces.
Experience real-time 3D design collaboration in a full-fidelity, physically accurate Omniverse scene. In this experience, design practitioners will learn how to connect third-party applications to the Omniverse platform via tutorial videos, then get hands-on experience in a full-fidelity architectural scene in Omniverse. Designers can adjust foliage, props and decor, materials, cameras, and render settings, plus experiment with extensions such as sun studies. Multiple design practitioners can connect to a shared Omniverse scene and live-edit via Omniverse Nucleus to experience multi-user design collaboration.
Gain hands-on experience creating VM templates for deploying NVIDIA AI VMs on VMware vSphere.
Deploy on-demand, GPU-accelerated Kubernetes clusters running on virtual machines that AI practitioners, data scientists, and AI researchers can quickly access.
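On such a cluster, workloads request GPUs through the standard Kubernetes resource mechanism exposed by the NVIDIA device plugin. A minimal pod spec sketch (the pod name and container image below are illustrative, not from the lab):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: gpu-smoke-test          # hypothetical name
spec:
  restartPolicy: Never
  containers:
    - name: cuda-container
      image: nvcr.io/nvidia/cuda:12.2.0-base-ubuntu22.04  # example CUDA base image
      command: ["nvidia-smi"]   # prints the GPU(s) visible to the container
      resources:
        limits:
          nvidia.com/gpu: 1     # request one GPU from the scheduler
```

Applying this with `kubectl apply -f` schedules the pod onto a GPU-capable node; `nvidia.com/gpu` is the resource name advertised by NVIDIA’s Kubernetes device plugin.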
NVIDIA Cloud Native Core is a collection of software for running cloud-native workloads on NVIDIA GPUs. Cloud Native Core is based on Ubuntu, Kubernetes, Helm, and the NVIDIA GPU and Network Operators.