NIM Offerings#
NVIDIA publishes inference microservices (NIMs) under distinct NIM offerings so you can choose the right balance of time to availability, peak performance, and enterprise lifecycle guarantees. The three offerings are NIM Day 0, NIM Turbo, and NIM Certified.
NIM Day 0#
NIM Day 0 delivers NIMs that are validated for functionality on a small set of NVIDIA GPUs and published within about 72 hours of upstream model availability. It is free to use and is not part of the NVIDIA AI Enterprise portfolio.
Who it is for: Anyone who wants to try new models quickly; it suits early exploration for all customers.
For more information, refer to About Day 0 NIMs.
NIM Turbo#
NIM Turbo delivers validated best-in-class inference performance for top models on NVIDIA hardware. It is free for use in production deployments and is not part of the NVIDIA AI Enterprise portfolio. NIM Turbo is currently in early access and is not yet available for general use.
Who it is for: Any customer that needs best-in-class performance and does not need CVE SLAs, compliance packaging, or long-term vendor support at this tier.
NIM Certified#
NIM Certified is the enterprise production offering. It supports broad compatibility across the NVIDIA hardware installed base, a documented refresh cadence, CVE handling, rolling inference stack updates, and validation patterns aligned to NVIDIA AI Enterprise branch rules (Feature Branch and Production Branch). This is the offering for organizations that require predictable maintenance, compliance-oriented processes, and long-term operational guarantees. It requires NVIDIA AI Enterprise.
Who it is for: Enterprise IT, regulated environments, global system integrators, and independent software vendors that need CVE SLAs, STIG/FIPS hardening (where applicable for Production Branch), fixed software baselines for Production Branch cycles, and enterprise support via NVIDIA AI Enterprise.
For more information, refer to Enterprise-Grade Inference Software Stack.