Large Language Models (LLMs)#
Introduction#
Large Language Models (LLMs) power a variety of tasks such as dialogue systems, text classification, summarization, and more. NeMo AutoModel provides a simple interface for loading and fine-tuning LLMs hosted on the Hugging Face Hub.
Run LLMs with NeMo AutoModel#
To run LLMs with NeMo AutoModel, make sure you’re using NeMo container version 25.11.00 or later. If the model you intend to fine-tune requires a newer version of Transformers, you may need to upgrade to the latest version of NeMo AutoModel:
pip3 install --upgrade git+https://github.com/NVIDIA-NeMo/AutoModel.git
For other installation options (e.g., uv), please see our Installation Guide.
Supported Models#
NeMo AutoModel supports the AutoModelForCausalLM in the Text Generation category. During preprocessing, it uses transformers.AutoTokenizer, which is sufficient for most LLM cases. If your model requires custom text handling, override the tokenizer in your recipe YAML or provide a custom dataset _target_. See LLM datasets and dataset overview.
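As a quick sanity check outside of a recipe, any model in this category can be loaded with the standard Hugging Face classes that NeMo AutoModel builds on. The snippet below is a minimal sketch using the public `transformers` API; the checkpoint name is only an example, not a NeMo AutoModel default.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Any AutoModelForCausalLM checkpoint on the Hugging Face Hub works here;
# "gpt2" is used only as a small, widely available example.
model_name = "gpt2"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Tokenize a prompt and generate a short continuation.
inputs = tokenizer("Hello, world", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=8)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

If this snippet loads and generates, the same checkpoint name can be used in a fine-tuning recipe.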
| Owner | Model Family | Architectures |
|---|---|---|
| Meta | | |
| Qwen / Alibaba Cloud | | |
| Qwen / Alibaba Cloud | | |
| Qwen / Alibaba Cloud | | |
| Qwen / Alibaba Cloud | | |
| Qwen / Alibaba Cloud | | |
| DeepSeek | | |
| DeepSeek | | |
| Mistral AI | | |
| Mistral AI | | |
| Mistral AI | | |
| Microsoft | | |
| Microsoft | | |
| Microsoft | | |
| NVIDIA | | |
| NVIDIA | | |
| NVIDIA | | |
| NVIDIA | | |
| THUDM / Zhipu AI | | |
| THUDM / Zhipu AI | | |
| THUDM / ZAI | | |
| THUDM / ZAI | | |
| IBM | | |
| IBM | | |
| IBM | | |
| Allen AI | | |
| Allen AI | | |
| Allen AI | | |
| OpenAI | | |
| EleutherAI | | |
| EleutherAI | | |
| BigCode | | |
| BigCode | | |
| BAAI | | |
| Baichuan Inc | | |
| Cohere | | |
| TII | | |
| LG AI Research | | |
| InternLM | | |
| Inception AI | | |
| MiniMax | | |
| OpenBMB | | |
| Moonshot AI | | |
| ByteDance Seed | | |
| Upstage | | |
| OrionStar | | |
| Stability AI | | |
| Stepfun AI | | |
| Parasail AI | | |
Fine-Tuning LLMs with NeMo AutoModel#
The models listed above can be fine-tuned using NeMo AutoModel. We support two primary fine-tuning approaches:

- Parameter-Efficient Fine-Tuning (PEFT): Updates only a small subset of parameters (typically <1%) using techniques like Low-Rank Adaptation (LoRA).
- Supervised Fine-Tuning (SFT): Updates all or most model parameters for deeper adaptation.
Please see our Fine-Tuning Guide to learn how to apply both methods to your data.
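To see why PEFT touches so few parameters, consider a single weight matrix: instead of updating the full d×d matrix, LoRA learns two low-rank factors A (r×d) and B (d×r) and adds their product to the frozen weights. A back-of-the-envelope count in plain Python (the dimensions below are illustrative, not NeMo AutoModel defaults):

```python
# Illustrative parameter count for LoRA on one weight matrix.
d = 4096   # hidden size of the (d x d) weight matrix; example value
r = 8      # LoRA rank; example value

full_params = d * d          # parameters updated by full fine-tuning (SFT)
lora_params = 2 * d * r      # A is (r x d), B is (d x r)

ratio = lora_params / full_params
print(f"LoRA trains {ratio:.2%} of this matrix's parameters")
```

At rank 8 and hidden size 4096, LoRA trains roughly 0.4% of the matrix, which is where the “typically <1%” figure comes from; the ratio grows linearly with the rank you choose.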
Tip
In these guides, we use the SQuAD v1.1 dataset for demonstration purposes, but you can use your own data. Update the recipe YAML dataset / validation_dataset sections accordingly. See LLM datasets and dataset overview.
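For reference, a SQuAD v1.1 record pairs a context passage and a question with one or more answer spans. The sketch below turns such a record into a prompt/completion pair; the record fields follow the public SQuAD schema, but the prompt template and `to_example` helper are illustrative assumptions, not the format the recipes use internally:

```python
# A SQuAD v1.1-style record (fields follow the public dataset schema).
record = {
    "context": "NeMo AutoModel fine-tunes models hosted on the Hugging Face Hub.",
    "question": "Where are the models hosted?",
    "answers": {"text": ["the Hugging Face Hub"], "answer_start": [43]},
}

def to_example(rec):
    """Format a SQuAD-style record as a prompt/completion pair.

    The template below is illustrative; adapt it to your recipe's
    dataset configuration.
    """
    prompt = f"Context: {rec['context']}\nQuestion: {rec['question']}\nAnswer:"
    completion = " " + rec["answers"]["text"][0]
    return {"prompt": prompt, "completion": completion}

example = to_example(record)
print(example["prompt"])
print(example["completion"])
```

Your own data only needs to reach a comparable prompt/completion (or chat-message) shape for the recipes to consume it.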