# Model Catalog
Explore the model families and sizes supported by the NVIDIA NeMo Customizer microservice.
> **Tip:** For specific values required to create customization targets, refer to the customization target value reference guide.
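If you want to check which customization targets your deployment already exposes before creating one, the following sketch queries the Customizer API. The base URL, the `/v1/customization/targets` path, and the response field names are assumptions that may differ in your deployment; confirm the exact values against the customization target value reference guide.

```python
# Minimal sketch, assuming the Customizer API is reachable at CUSTOMIZER_BASE_URL
# and exposes a /v1/customization/targets listing endpoint. The path and the
# response field names are assumptions; verify them in the reference guide.
import os

import requests

CUSTOMIZER_BASE_URL = os.environ.get("CUSTOMIZER_BASE_URL", "http://localhost:8000")


def list_customization_targets() -> list[dict]:
    """Return the customization targets the microservice currently exposes."""
    response = requests.get(
        f"{CUSTOMIZER_BASE_URL}/v1/customization/targets", timeout=30
    )
    response.raise_for_status()
    # The payload shape ("data" holding a list of target objects) is an assumption.
    return response.json().get("data", [])


if __name__ == "__main__":
    for target in list_customization_targets():
        # "name" and "base_model" are assumed field names; see the reference guide.
        print(target.get("name"), target.get("base_model"))
```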
## Model Families

- **Llama (Meta):** View the available Llama models, ranging from 8 billion to 70 billion parameters.
- **Llama Nemotron (NVIDIA):** View the available Llama Nemotron models, including Nano and Super variants for efficient and advanced instruction tuning.
- **Phi (Microsoft):** View the available Phi models, designed for strong reasoning capabilities with efficient deployment.
## Support Matrix
The support matrices list the supported models and the features available for each. For detailed technical specifications of each model (architecture, parameter count, token limits, and so on), refer to the model family pages listed above.
### Large Language Models
All models in the following table support both chat and completion training. For an illustration of how the fine-tuning options map to a training request, see the sketch after the table.
| Model | Train a Chat Model with Tool Calling | Fine-tuning Options | Sequence Packing[1] | Inference with NIM | Reasoning |
|---|---|---|---|---|---|
|  | Yes | LoRA | No | Supported (unverified) | No |
|  | Yes | SFT, LoRA | Yes | Supported (unverified) | No |
|  | Yes | SFT, LoRA | Yes | Supported | No |
|  | Yes | LoRA | Yes | Supported (unverified) | No |
|  | Yes | SFT, LoRA | Yes | Supported | No |
|  | Yes | LoRA | Yes | Supported (unverified) | No |
|  | No | LoRA, All Weights | No | Supported | Yes |
|  | No | LoRA | No | Supported | Yes |
|  | No | SFT, LoRA | No | Not supported | No |
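As a rough illustration of how the table columns translate into a training request, the sketch below submits a customization job that selects a fine-tuning option and enables sequence packing. The endpoint path, config and dataset names, and hyperparameter field names are assumptions for illustration only, not the definitive API; consult the Customizer API reference for the exact schema.

```python
# Minimal sketch of starting a fine-tuning job matching a row in the table above:
# "finetuning_type" selects LoRA vs. full-weight training, and sequence packing
# should only be enabled for models whose Sequence Packing column reads "Yes".
# All endpoint paths, names, and field names below are assumptions; confirm them
# against the Customizer API reference before use.
import os

import requests

CUSTOMIZER_BASE_URL = os.environ.get("CUSTOMIZER_BASE_URL", "http://localhost:8000")

job_spec = {
    # Hypothetical config and dataset identifiers, for illustration only.
    "config": "my-namespace/my-model-config",
    "dataset": {"name": "my-chat-dataset", "namespace": "default"},
    "hyperparameters": {
        "training_type": "sft",            # chat/completion training
        "finetuning_type": "lora",         # or "all_weights" where the table allows it
        "epochs": 2,
        "batch_size": 8,
        "learning_rate": 1e-4,
        "sequence_packing_enabled": True,  # assumed flag; only for packing-capable models
    },
}

response = requests.post(
    f"{CUSTOMIZER_BASE_URL}/v1/customization/jobs", json=job_spec, timeout=30
)
response.raise_for_status()
print("Created job:", response.json().get("id"))
```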
### Embedding Models
| Model | Fine-tuning Options | Inference with NIM |
|---|---|---|
|  | SFT | Not supported |