Nemotron-H#
NVIDIA Nemotron-H is a hybrid Mamba-2 / transformer architecture that interleaves selective state space layers with standard attention layers for improved efficiency on long sequences.
| | |
|---|---|
| Task | Text Generation |
| Architecture | `NemotronHForCausalLM` |
| Parameters | 9B – 30B |
| HF Org | nvidia |
Available Models#
NVIDIA-Nemotron-Nano-9B-v2: 9B hybrid model
NVIDIA-Nemotron-Nano-12B-v2: 12B hybrid model
NVIDIA-Nemotron-3-Nano-30B-A3B-BF16: 30B total, 3B activated (sparse MoE + Mamba-2)
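These checkpoints load through the standard `transformers` generation API, so you can smoke-test one before any fine-tuning. A minimal sketch, assuming the 9B checkpoint is published under the `nvidia` HF org (swap in any of the IDs listed above; dtype and device settings are illustrative):

```python
# Minimal generation sketch. The model ID is assumed from the list above;
# verify the recommended dtype and settings against the model card.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "nvidia/NVIDIA-Nemotron-Nano-9B-v2"  # assumed HF ID
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # keeps memory reasonable on a single GPU
    device_map="auto",
    trust_remote_code=True,
)

inputs = tokenizer("Briefly explain state space models:", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```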
Architecture#
`NemotronHForCausalLM`
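You can confirm the architecture class, and see the Mamba-2/attention interleaving described above, by reading it out of the model config. A minimal sketch, assuming the `nvidia/NVIDIA-Nemotron-Nano-9B-v2` checkpoint and that its config exposes a `hybrid_override_pattern` field (present in published Nemotron-H configs, but worth verifying against the model card):

```python
# Sketch: inspect the hybrid layer layout of a Nemotron-H checkpoint.
# Both the model ID and the hybrid_override_pattern field are assumptions;
# check them against the Hugging Face model card.
from transformers import AutoConfig

config = AutoConfig.from_pretrained(
    "nvidia/NVIDIA-Nemotron-Nano-9B-v2", trust_remote_code=True
)
print(config.architectures)            # expected: ['NemotronHForCausalLM']
print(config.hybrid_override_pattern)  # e.g. "M-M-M-M*-...": "M" marks a
                                       # Mamba-2 layer, "*" an attention
                                       # layer, "-" an MLP layer
```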
Example HF Models#
| Model | HF ID |
|---|---|
| Nemotron-Nano 9B v2 | `nvidia/NVIDIA-Nemotron-Nano-9B-v2` |
| Nemotron-Nano 12B v2 | `nvidia/NVIDIA-Nemotron-Nano-12B-v2` |
| Nemotron-3-Nano 30B A3B | `nvidia/NVIDIA-Nemotron-3-Nano-30B-A3B-BF16` |
Example Recipes#
| Recipe | Description |
|---|---|
| SFT — Nemotron-Nano 9B on SQuAD | Full-parameter supervised fine-tuning |
| LoRA — Nemotron-Nano 9B on SQuAD | Parameter-efficient LoRA fine-tuning |
| SFT — Nemotron-3-Nano 30B on HellaSwag | Full-parameter supervised fine-tuning |
| LoRA — Nemotron-3-Nano 30B on HellaSwag | Parameter-efficient LoRA fine-tuning |
Try with NeMo AutoModel#
1. Install NeMo AutoModel (see the Installation Guide for full instructions):

```bash
pip install nemo-automodel
```
2. Clone the repo to get the example recipes:

```bash
git clone https://github.com/NVIDIA-NeMo/Automodel.git
cd Automodel
```
3. Run the recipe from inside the repo:

```bash
automodel --nproc-per-node=8 examples/llm_finetune/nemotron/nemotron_nano_9b_squad.yaml
```
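Here `--nproc-per-node` sets the number of worker processes launched on the node, typically one per GPU; if your machine has fewer than 8 GPUs, lower it to match.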
Run with Docker#
1. Pull the container and mount a checkpoint directory:

```bash
docker run --gpus all -it --rm \
  --shm-size=8g \
  -v $(pwd)/checkpoints:/opt/Automodel/checkpoints \
  nvcr.io/nvidia/nemo-automodel:26.02.00
```
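Anything a recipe writes under /opt/Automodel/checkpoints is persisted to ./checkpoints on the host through the volume mount, so fine-tuned weights survive after the container exits.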
2. Navigate to the AutoModel directory (where the recipes are):

```bash
cd /opt/Automodel
```
3. Run the recipe:

```bash
automodel --nproc-per-node=8 examples/llm_finetune/nemotron/nemotron_nano_9b_squad.yaml
```
See the Installation Guide and LLM Fine-Tuning Guide.
Fine-Tuning#
See the LLM Fine-Tuning Guide.