About Get Started
Use the following resources to set up your environment, pull the container, and run your first model.
Try a Hosted Model
Before deploying a model locally, try a NIM API (for example, Nemotron-3-Content-Safety) on build.nvidia.com to get a feel for the API and model capabilities.
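The hosted NIM APIs on build.nvidia.com expose an OpenAI-compatible endpoint, so you can exercise a model with a few lines of client code before deploying anything locally. The following is a minimal sketch: the model identifier is a placeholder, and you should copy the exact model name, endpoint, and your API key from the model's page on build.nvidia.com.

```python
# Minimal sketch: call a hosted NIM API via its OpenAI-compatible endpoint.
# Assumptions: the model id below is a placeholder; substitute the exact id
# and API key ("nvapi-...") shown on the model page at build.nvidia.com.
from openai import OpenAI

client = OpenAI(
    base_url="https://integrate.api.nvidia.com/v1",  # hosted NIM endpoint
    api_key="nvapi-...",  # your key from build.nvidia.com
)

completion = client.chat.completions.create(
    model="nvidia/nemotron-3-content-safety",  # placeholder; check the model page
    messages=[{"role": "user", "content": "Hello! What can you do?"}],
    max_tokens=64,
)
print(completion.choices[0].message.content)
```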
Next Steps
- Hardware, software, driver, and container runtime requirements for deploying NIM VLM containers. (how-to)
- Instructions for configuring network and authentication, logging into Docker, and pulling the NIM container image. (how-to)
- Configure your local cache and advanced settings to customize your NIM VLM deployment. (how-to)
- Step-by-step instructions for deploying model-specific and model-free NIM containers and running inference; see the sketch after this list. (how-to)
- Supported models, profiles, and hardware platforms for the latest release. (reference)
- New features, bug fixes, and known issues for the current release. (reference)
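Once a NIM container is running, inference works the same way as against the hosted API, just pointed at your local endpoint. The sketch below assumes the container serves its OpenAI-compatible API on localhost port 8000 (a common default) and uses a placeholder model id; adjust both to match your deployment, and query GET /v1/models to list what the container actually serves.

```python
# Minimal sketch: run inference against a locally deployed NIM container.
# Assumptions: the API is reachable at localhost:8000 (adjust if you mapped
# a different port) and the model id below is a placeholder.
import requests

response = requests.post(
    "http://localhost:8000/v1/chat/completions",
    json={
        "model": "nvidia/your-deployed-model",  # placeholder; see GET /v1/models
        "messages": [{"role": "user", "content": "Say hello."}],
        "max_tokens": 64,
    },
    timeout=60,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```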