# Model Catalog
Explore the model families and sizes supported by NVIDIA NeMo Customizer.
> **Tip:** For information on setting up model entities for customization, see the Manage Model Entities guide. For fine-tuning and deployment tutorials, see the Tutorials guide.
## Before You Start
If you are downloading models hosted on Hugging Face, create a secret containing your Hugging Face API key, then create a FileSet and a Model Entity that reference the model. See Manage Model Entities for Customization for setup instructions.
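The secret ultimately lands in the cluster as a Kubernetes Secret with the token base64-encoded. A minimal sketch of building such a manifest in Python; the secret name `hf-api-key` and data key `HF_TOKEN` are illustrative assumptions, so use whatever names your Model Entity and FileSet configuration expects:

```python
import base64

def build_hf_secret_manifest(token: str, name: str = "hf-api-key") -> dict:
    """Return a Kubernetes Secret manifest holding a Hugging Face API key.

    Kubernetes requires values under `data` to be base64-encoded.
    The secret name and data key here are placeholders.
    """
    return {
        "apiVersion": "v1",
        "kind": "Secret",
        "metadata": {"name": name},
        "type": "Opaque",
        "data": {"HF_TOKEN": base64.b64encode(token.encode()).decode()},
    }

# Example: build a manifest for a (fake) token.
manifest = build_hf_secret_manifest("hf_exampletoken123")
print(manifest["metadata"]["name"])  # hf-api-key
```

You would typically serialize this dict to YAML and apply it with your cluster tooling, or create the equivalent secret through the NeMo platform's own secret management.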
## Model Families
- Llama models from Meta, ranging from 8 billion to 70 billion parameters.
- Llama Nemotron models from NVIDIA, including Nano and Super variants for efficient and advanced instruction tuning.
- Phi models from Microsoft, designed for strong reasoning capabilities with efficient deployment.
- Embedding models optimized for retrieval and question-answering tasks.
- GPT-OSS models supported for customization.
- Qwen models from Alibaba Cloud, including compact variants for efficient customization.
- Mistral models, including Mistral and Ministral variants for instruction-following and reasoning tasks.
## Tested Models
The following table lists the models that NVIDIA has tested and their available features. NeMo Customizer works with all LLM NIM microservices, so models available for fine-tuning are not limited to those listed here.
For detailed technical specifications of each model, such as architecture, parameter count, and token limits, refer to the model family pages.
### Large Language Models
The following models support both chat and completion model training.
| Model | Train a Chat Model with Tool Calling | Fine-tuning Options | Sequence Packing[1] | Inference with NIM | Reasoning |
|---|---|---|---|---|---|
| | Yes | Full SFT, LoRA | Yes | Supported | No |
| | Yes | Full SFT, LoRA | Yes | Supported | No |
| | Yes | Full SFT, LoRA | Yes | Supported | No |
| | No | Full SFT, LoRA | Yes | Supported | Yes |
| | No | Full SFT, LoRA | No | Supported | Yes |
| | No | Full SFT, LoRA | No | Supported (only Full SFT) | Yes |
| | No | LoRA | No | Supported | Yes |
| | No | Full SFT, LoRA | No | Supported | No |
| | Yes | Full SFT, LoRA | No | Supported | Yes |
| | No | Full SFT, LoRA | No | Supported | Yes |
| | No | Full SFT, LoRA | No | Supported | Yes |
| | No | Full SFT, LoRA | No | Supported | No |
| | No | Full SFT, LoRA | No | No | No |
| | No | Full SFT, LoRA | Yes | No | Yes |
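When scripting model selection against a capability matrix like the one above, the rows can be expressed as plain data and filtered programmatically. A minimal sketch; the model names and field names below are illustrative placeholders, not entries from this catalog:

```python
# Placeholder capability rows in the shape of the table above.
MODELS = [
    {"name": "model-a", "tool_calling": True,  "options": {"Full SFT", "LoRA"},
     "sequence_packing": True,  "nim_inference": True,  "reasoning": False},
    {"name": "model-b", "tool_calling": False, "options": {"LoRA"},
     "sequence_packing": False, "nim_inference": True,  "reasoning": True},
    {"name": "model-c", "tool_calling": False, "options": {"Full SFT", "LoRA"},
     "sequence_packing": True,  "nim_inference": False, "reasoning": True},
]

def supports(model: dict, *, option: str, packing: bool = False) -> bool:
    """True if the model offers the fine-tuning option (and sequence packing, if required)."""
    return option in model["options"] and (model["sequence_packing"] or not packing)

# Which placeholder models support LoRA with sequence packing?
lora_packed = [m["name"] for m in MODELS if supports(m, option="LoRA", packing=True)]
print(lora_packed)  # ['model-a', 'model-c']
```

The same pattern extends to any column in the table, such as filtering on NIM inference support or reasoning capability.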
### Embedding Models
| Model | Fine-tuning Options | Inference with NIM |
|---|---|---|
| | Full SFT, LoRA (merged) | Supported |
For detailed technical specifications and configuration information for embedding models, see the Embedding Models page.