Large Language Models (LLMs)#
Introduction#
Large Language Models (LLMs) power tasks such as dialogue systems, text classification, and summarization. NeMo AutoModel provides a simple interface for loading and fine-tuning LLMs hosted on the Hugging Face Hub.
Run LLMs with NeMo AutoModel#
To run LLMs with NeMo AutoModel, make sure you’re using NeMo container version 25.11.00 or later. If the model you intend to fine-tune requires a newer version of Transformers, you may need to upgrade to the latest version of NeMo AutoModel by running:
pip3 install --upgrade git+https://github.com/NVIDIA-NeMo/AutoModel.git
For other installation options (e.g., uv), please see our Installation Guide.
Supported Models#
NeMo AutoModel supports models that load through AutoModelForCausalLM (the Text Generation category on the Hugging Face Hub). During preprocessing, it uses transformers.AutoTokenizer, which is sufficient for most LLM use cases. If your model requires custom text handling (for example, a custom chat template), override the tokenizer in your recipe YAML (for example, set dataset.tokenizer / validation_dataset.tokenizer) or provide a custom dataset _target_. See LLM datasets and dataset overview.
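If you do override the tokenizer, a recipe entry might look like the minimal sketch below. Only the dataset.tokenizer / validation_dataset.tokenizer locations and the _target_ convention come from this page; the nested key names and the model ID are illustrative assumptions, so check your recipe's schema before copying.

```yaml
# Sketch: overriding the tokenizer in a recipe YAML (illustrative, not a documented schema).
# The dataset.tokenizer / validation_dataset.tokenizer locations are described above;
# the nested keys and the model ID are assumptions -- adjust to your recipe.
dataset:
  tokenizer:
    _target_: transformers.AutoTokenizer.from_pretrained
    pretrained_model_name_or_path: meta-llama/Llama-3.1-8B-Instruct  # any HF repo whose tokenizer you want
validation_dataset:
  tokenizer:
    _target_: transformers.AutoTokenizer.from_pretrained
    pretrained_model_name_or_path: meta-llama/Llama-3.1-8B-Instruct
```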
The list below covers the main model families we test against (FSDP2 combined with SFT/PEFT).
- Aquila, Aquila2
- Baichuan2, Baichuan
- Bamba
- ChatGLM
- Command‑R
- DeciLM
- DeepSeek
- DeepSeek V3, DeepSeek V3.2
- EXAONE‑3
- Falcon
- Gemma
- Gemma 2
- Gemma 3
- GLM‑4
- GLM‑4‑0414
- GLM‑4‑MoE
- StarCoder, SantaCoder, WizardCoder
- GPT‑J
- GPT‑NeoX, Pythia, OpenAssistant, Dolly V2, StableLM
- GPT-OSS
- Granite 3.0, Granite 3.1, PowerLM
- Granite 3.0 MoE, PowerMoE
- Granite MoE Shared
- GritLM
- InternLM
- InternLM2
- InternLM3
- Jais
- Llama 3.1, Llama 3, Llama 2, LLaMA, Yi
- MiniCPM
- MiniCPM3
- Mistral, Mistral‑Instruct
- Mixtral‑8x7B, Mixtral‑8x7B‑Instruct
- Nemotron‑3, Nemotron‑4, Minitron
- Nemotron-Nano-{9B,12B}
- Nemotron-3-Nano-30B-A3B-BF16
- OLMo
- OLMo2
- OLMoE
- Orion
- Phi
- Phi‑4, Phi‑3
- Phi‑3‑Small
- QwQ, Qwen2
- Qwen2MoE
- Qwen3
- Qwen3MoE
- Qwen3‑Next
- Step‑3.5
- StableLM
- Starcoder2
- Solar Pro
- Ministral3 3B, 8B, 14B
- Devstral-Small-2-24B
Fine-Tuning LLMs with NeMo AutoModel#
The models listed above can be fine-tuned using NeMo AutoModel to adapt them to specific tasks or domains. We support two primary fine-tuning approaches:
Parameter-Efficient Fine-Tuning (PEFT): Updates only a small subset of parameters (typically <1%) using techniques like Low-Rank Adaptation (LoRA). This is ideal for resource-constrained environments.
Supervised Fine-Tuning (SFT): Updates all or most model parameters for deeper adaptation, suitable for high-precision applications.
Please see our Fine-Tuning Guide to learn how you can apply both of these fine-tuning methods to your data.
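As a rough illustration of the difference, a LoRA (PEFT) recipe usually adds an adapter section on top of an otherwise normal fine-tuning config, while SFT simply omits it. The sketch below uses common LoRA hyperparameter names (rank, scaling alpha, target modules); the section and key names are assumptions rather than the documented NeMo AutoModel schema, so follow the Fine-Tuning Guide for the exact fields.

```yaml
# Illustrative-only sketch of a LoRA (PEFT) section in a fine-tuning recipe.
# Section and key names are assumptions, not the documented NeMo AutoModel schema;
# see the Fine-Tuning Guide for the real fields.
peft:
  peft_scheme: lora                 # assumption: selects LoRA instead of full-parameter SFT
  dim: 16                           # LoRA rank (dimension of the low-rank update)
  alpha: 32                         # LoRA scaling factor
  target_modules: [q_proj, v_proj]  # attention projections to adapt
```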
Tip
In these guides, we use the SQuAD v1.1 dataset for demonstration purposes, but you can use your own data.
To do so, update the recipe YAML dataset / validation_dataset sections (for example dataset._target_, dataset_name/path_or_dataset, and split). See LLM datasets and dataset overview.
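As a sketch, the change might look like the snippet below. The dataset._target_, path_or_dataset, and split keys are the ones named above; the module path and file path are placeholder values standing in for your own code and data.

```yaml
# Sketch: pointing the recipe at your own data instead of SQuAD v1.1.
# The keys shown are the ones named in the tip above; the values are placeholders.
dataset:
  _target_: my_project.data.make_my_dataset   # your own dataset-building function (hypothetical)
  path_or_dataset: /data/my_corpus.jsonl      # local file or Hugging Face dataset ID
  split: train
validation_dataset:
  _target_: my_project.data.make_my_dataset
  path_or_dataset: /data/my_corpus.jsonl
  split: validation
```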