Enterprise-Grade AI Software Platform

NVIDIA AI Enterprise is an end-to-end, cloud-native software platform that accelerates data science pipelines and streamlines development and deployment of production-grade co-pilots and other generative AI applications. Easy-to-use microservices provide optimized model performance with enterprise-grade security, support, and stability to ensure a smooth transition from prototype to production for enterprises that run their businesses on AI.

platform-overview-01.png

NVIDIA AI Enterprise is tightly integrated with accelerated platforms to speed up AI workloads through software optimization. This not only improves efficiency and performance but also reduces energy use, data center footprint, and infrastructure investment, contributing to more sustainable computing and shorter time to production.

Enterprise-grade security, stability, manageability, and support

As AI rapidly evolves and expands, the complexity of the software stack and its dependencies grows. NVIDIA AI Enterprise is designed for the mission-critical AI that businesses run on, offering regular releases of security patches for critical Common Vulnerabilities and Exposures (CVEs), production branches for API stability, end-to-end management software, and enterprise support with service-level agreements (SLAs).

Cloud native and certified to run everywhere

NVIDIA AI Enterprise is optimized and certified to ensure reliable performance whether running AI in the public cloud, in virtualized data centers, or on the DGX platform. This provides the flexibility to develop applications once and deploy them anywhere, reducing the risk of moving from pilot to production caused by infrastructure and architectural differences between environments.

NVIDIA AI application frameworks, NVIDIA pretrained models, and all other NVIDIA AI software available on NGC are supported with an NVIDIA AI Enterprise license. With more than 100 AI frameworks and pretrained models, including NeMo, Maxine, and cuOpt, and more being added, look for the “NVIDIA AI Enterprise Supported” label on NGC.

Organizations start their AI journey by using the open, freely available NGC libraries and frameworks to experiment and pilot. When they’re ready to move from pilot to production, enterprises can easily transition to a fully managed and secure AI platform with an NVIDIA AI Enterprise subscription. This gives enterprises deploying business-critical AI the assurance of business continuity with NVIDIA Enterprise Support and access to NVIDIA AI experts.

NVIDIA AI Enterprise includes new AI solution workflows for building AI applications, including contact center intelligent virtual assistants, audio transcription, and cybersecurity digital fingerprinting to detect anomalies. These packaged AI workflow examples include NVIDIA AI frameworks and pretrained models, as well as resources such as Helm Charts, Jupyter Notebooks, and documentation to help customers get started building AI-based solutions. NVIDIA’s cloud-native AI workflows run as microservices that can be deployed on Kubernetes alone or alongside other microservices to create production-ready applications.
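
As a minimal illustrative sketch (not part of the packaged workflows themselves), the official Kubernetes Python client can be used to confirm that the microservices backing a deployed workflow are running. The "ai-workflows" namespace below is a hypothetical placeholder; the actual namespace depends on how the workflow's Helm chart was installed.

  # Minimal sketch: list the pods backing a deployed AI workflow on Kubernetes.
  # Assumes cluster access is already configured and that the workflow was
  # installed into a namespace named "ai-workflows" (a placeholder).
  from kubernetes import client, config

  config.load_kube_config()   # reads the local kubeconfig (e.g., ~/.kube/config)
  v1 = client.CoreV1Api()

  pods = v1.list_namespaced_pod(namespace="ai-workflows")
  for pod in pods.items:
      # Each microservice in the workflow runs as one or more pods.
      print(pod.metadata.name, pod.status.phase)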

platform-overview-02.png

Key Benefits:

  • Reduce development time at a lower cost

  • Improve accuracy and performance

  • Gain confidence in outcomes by leveraging NVIDIA expertise

Pretrained AI models make high-performing AI development easy, quick, and accessible by eliminating the need to build models from scratch. NVIDIA AI Enterprise includes unencrypted pretrained models for healthcare and vision AI tasks such as people detection, vehicle detection, federated learning, image registration, and more. With these unencrypted models, developers can view a model’s weights and biases, which helps with model explainability and with understanding model bias. In addition, unencrypted models are easier to debug and integrate into custom AI applications.
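
As a minimal sketch of what working with an unencrypted model can look like, the snippet below loads a checkpoint with PyTorch and prints per-parameter shapes and statistics. The file name is a hypothetical placeholder, and the checkpoint is assumed to contain a flat state dict; actual model formats on NGC vary by model.

  # Minimal sketch: inspect the weights of an unencrypted pretrained checkpoint.
  # "pretrained_model.pth" is a placeholder file name, not an actual NGC artifact.
  import torch

  state_dict = torch.load("pretrained_model.pth", map_location="cpu")

  for name, tensor in state_dict.items():
      if not torch.is_tensor(tensor):
          continue  # skip any nested metadata entries
      # Viewing shapes and simple statistics helps with explainability,
      # bias analysis, and debugging during integration.
      print(f"{name}: shape={tuple(tensor.shape)}, "
            f"mean={tensor.float().mean():.4f}, std={tensor.float().std():.4f}")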

NVIDIA NIM, part of NVIDIA AI Enterprise, is a set of easy-to-use microservices designed to accelerate deployment of generative AI across your enterprise. This versatile runtime supports a broad spectrum of AI models, from open-source community models to NVIDIA AI Foundation models, as well as custom AI models. Leveraging industry-standard APIs, developers can quickly build enterprise-grade AI applications with just a few lines of code. Built on robust foundations, including inference engines such as Triton Inference Server, TensorRT, TensorRT-LLM, and PyTorch, NIM is engineered to facilitate seamless AI inferencing at scale, ensuring that you can deploy AI applications anywhere with confidence. Whether on premises or in the cloud, NIM is the fastest way to achieve accelerated generative AI inference at scale.
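
For example, NIM large language model microservices expose an OpenAI-compatible API, so existing client libraries can be pointed at a deployed endpoint. The sketch below assumes a NIM running locally on port 8000; the base URL and model identifier are illustrative placeholders, so check the documentation for the specific NIM you deploy.

  # Minimal sketch: call a locally deployed NIM LLM microservice through its
  # OpenAI-compatible API using the openai Python client (openai>=1.0).
  from openai import OpenAI

  client = OpenAI(
      base_url="http://localhost:8000/v1",  # assumed local NIM endpoint
      api_key="not-used",                   # local deployments may not require a real key
  )

  response = client.chat.completions.create(
      model="meta/llama3-8b-instruct",      # placeholder model identifier
      messages=[{"role": "user", "content": "What does NVIDIA AI Enterprise include?"}],
      max_tokens=128,
  )
  print(response.choices[0].message.content)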

NVIDIA AI Enterprise is certified to run across public clouds, data centers, workstations, the DGX platform, and the edge. A complete list of supported configurations is provided in the NVIDIA AI Enterprise Product Support Matrix.

The NVIDIA Enterprise Support and Services Guide provides information about using NVIDIA Enterprise Support and services. It is intended for NVIDIA’s potential and existing enterprise customers. The guide is a non-binding document and should be used to obtain information about NVIDIA Enterprise-branded support and services.

Use the NPN Partner finder for partner and OEM (Original Equipment Manufacturer) support.

Use the Consumer Support webpage for NVIDIA Consumer Support.
