About Get Started with NIM LLM
Use the following resources to set up your environment, pull the container, and run your first model.
Try a Hosted Model
Before deploying a model locally, try a hosted NIM endpoint (such as llama-3.3-70b-instruct) at build.nvidia.com to get a feel for the API and model capabilities.
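As a quick sketch of what that first call looks like, the hosted endpoints follow the OpenAI-compatible chat completions convention. The endpoint URL, model identifier, and sampling parameters below are illustrative assumptions, not values confirmed by this page; set an API key from build.nvidia.com in `NVIDIA_API_KEY` to actually send the request.

```python
"""Minimal sketch of calling a hosted NIM model (assumed OpenAI-compatible API)."""
import json
import os
import urllib.request

# Assumed hosted endpoint; check build.nvidia.com for the current URL.
API_URL = "https://integrate.api.nvidia.com/v1/chat/completions"


def build_request(prompt: str, model: str = "meta/llama-3.3-70b-instruct") -> dict:
    # Standard chat-completions payload; max_tokens and temperature are
    # illustrative defaults, not recommendations from this page.
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 64,
        "temperature": 0.2,
    }


def send(payload: dict, api_key: str) -> dict:
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


if __name__ == "__main__":
    payload = build_request("What is NVIDIA NIM?")
    key = os.environ.get("NVIDIA_API_KEY")
    if key:
        reply = send(payload, key)
        print(reply["choices"][0]["message"]["content"])
    else:
        # Without a key, just show the request that would be sent.
        print(json.dumps(payload, indent=2))
```

Because the same chat completions schema is served by locally deployed NIM containers, the only change needed later is pointing `API_URL` at your own instance.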
Next Steps
- Hardware, software, driver, and container runtime requirements for deploying NIM LLM containers.
- Instructions for configuring network and authentication, logging in to Docker, and pulling the NIM container image.
- Instructions for configuring your local cache and advanced settings to customize your NIM LLM deployment.
- Step-by-step instructions for deploying model-specific and model-free NIM containers and running inference.
- Supported models, profiles, and hardware platforms for the latest release.
- New features, bug fixes, and known issues for the current release.
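The authenticate, pull, and deploy flow that the pages above document can be sketched as follows. This is a command sketch under stated assumptions, not a definitive invocation: the model name and image tag are illustrative placeholders, and the cache mount path and port follow the common NIM container pattern rather than any specific release.

```shell
# Authenticate to NVIDIA's container registry (nvcr.io); NGC personal keys
# use the literal username '$oauthtoken' with the API key as the password.
echo "$NGC_API_KEY" | docker login nvcr.io --username '$oauthtoken' --password-stdin

# Run a model-specific NIM container (model name and tag are illustrative).
# The volume mount caches downloaded model weights between runs, and port
# 8000 exposes the OpenAI-compatible inference API.
mkdir -p "$HOME/.cache/nim"
docker run --rm --gpus all \
  -e NGC_API_KEY \
  -v "$HOME/.cache/nim:/opt/nim/.cache" \
  -p 8000:8000 \
  nvcr.io/nim/meta/llama-3.3-70b-instruct:latest
```

Consult the prerequisites and configuration pages linked above for the exact image path, GPU requirements, and cache settings for your release.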