Welcome to the trial of NVIDIA AI Enterprise on NVIDIA LaunchPad.
NVIDIA AI Enterprise is a software suite that enables organizations to harness the power of AI, even if they don't have AI expertise today. Optimized to streamline AI development and deployment, NVIDIA AI Enterprise includes proven, open-source containers and frameworks that are certified to run on common data center platforms from VMware and Red Hat, on NVIDIA-Certified servers configured with GPUs or CPUs only, and in the public cloud. Because support is included, organizations get the transparency of open source along with the assurance that the global NVIDIA Enterprise Support team will help keep AI projects on track.
Every step of the AI workflow is streamlined, from data preparation and training to inference and deployment, and AI practitioners can train both complex neural network models and tree-based models. NVIDIA AI Enterprise includes proven containers and frameworks that ease the adoption of enterprise AI, such as conversational AI, often used for automated customer support and digital sales agents, and computer vision, used for segmentation, classification, and detection.
In this lab, we show you how an AI practitioner can deploy a training-plus-inference workload on OpenShift using a Helm chart (discussed in the next section). You will run through an end-to-end data science workflow on OpenShift that uses TensorFlow to train an image classification model and then uses NVIDIA's Triton Inference Server to deploy the model for inference in production. You will use a client application to issue inference requests to the Triton Inference Server. The Stanford Online Products dataset will be used to train the e-commerce image classification model; this dataset includes images of products typically found on e-commerce websites.
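To give a feel for what the client application does, here is a minimal sketch of issuing an inference request to Triton over its HTTP/REST (KServe v2) endpoint using only the Python standard library. The model name (`ecommerce_classifier`), input tensor name (`input_1`), and input shape are illustrative assumptions; the actual values depend on the model the Helm chart deploys, and the lab's client application may use the `tritonclient` library instead.

```python
import json
import urllib.request


def build_infer_request(input_name, shape, data):
    """Build a KServe v2 inference request body for Triton's HTTP API."""
    return {
        "inputs": [
            {
                "name": input_name,
                "shape": shape,
                "datatype": "FP32",
                "data": data,  # flattened tensor values, row-major order
            }
        ]
    }


def infer(server_url, model_name, payload):
    """POST the request to Triton's /v2/models/<name>/infer endpoint."""
    req = urllib.request.Request(
        f"{server_url}/v2/models/{model_name}/infer",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())


if __name__ == "__main__":
    # Hypothetical input name/shape for a 224x224 RGB image classifier;
    # a real client would fill `data` with preprocessed pixel values.
    payload = build_infer_request(
        "input_1", [1, 224, 224, 3], [0.0] * (224 * 224 * 3)
    )
    # result = infer("http://localhost:8000", "ecommerce_classifier", payload)
```

The response contains an `outputs` list with the model's class scores, which the client can decode into a predicted product category.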
This lab should take roughly 2-3 hours to complete, including the time to train the model. The GPU-enabled OpenShift environment also remains available for you to deploy your own applications.