Diffusion Model Fine-Tuning with NeMo AutoModel#
Introduction#
Diffusion models generate images and videos by learning to reverse a noise process: starting from random noise, they iteratively refine it into coherent visual output guided by a text prompt. Pretrained diffusion models (like FLUX.1-dev for images or Wan 2.1 for video) produce impressive general-purpose results, but they know nothing about your particular visual domain, style, or subject matter. Fine-tuning bridges that gap: you adapt the model on your own data so it produces outputs that match your requirements, without the cost of training from scratch.
Under the hood, NeMo AutoModel uses flow matching, a modern generative framework that learns to transform noise into data by regressing a velocity field along straight interpolation paths. It integrates with Hugging Face Diffusers to provide distributed fine-tuning for text-to-image and text-to-video models. This guide walks you through the process end to end, from installation through training and inference, using Wan 2.1 T2V 1.3B as a running example.
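The straight-path objective behind flow matching can be illustrated with a toy one-dimensional example. This is a conceptual sketch only; the function and variable names below are ours, not part of NeMo AutoModel:

```python
def interpolate(x0, x1, t):
    """Point on the straight path from noise x0 (t=0) to data x1 (t=1)."""
    return (1.0 - t) * x0 + t * x1

def fm_loss(predicted_velocity, x0, x1):
    """Squared error against the constant target velocity x1 - x0 along the path."""
    target = x1 - x0
    return (predicted_velocity - target) ** 2

# A quarter of the way along the path from 0.0 to 2.0 sits at 0.5,
# and a perfect velocity prediction (2.0) incurs zero loss.
xt = interpolate(x0=0.0, x1=2.0, t=0.25)
loss = fm_loss(predicted_velocity=2.0, x0=0.0, x1=2.0)
```

During training, the model plays the role of `predicted_velocity`: it sees the interpolated point `xt` and the time `t`, and is regressed toward the constant velocity of the straight path.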
Workflow Overview#
+---------------+    +---------------+    +---------------+    +---------------+    +---------------+
| 1. Install    |--->| 2. Prepare    |--->| 3. Configure  |--->| 4. Train      |--->| 5. Generate   |
|               |    |    Data       |    |               |    |               |    |               |
| pip install   |    | Encode to     |    | YAML recipe   |    | torchrun      |    | Run inference |
| or Docker     |    | .meta files   |    |               |    |               |    | with ckpt     |
+---------------+    +---------------+    +---------------+    +---------------+    +---------------+
| Step | Section | What You Do |
|---|---|---|
| 1. Install | Install NeMo AutoModel | Install the package via pip or Docker |
| 2. Prepare Data | Prepare Your Dataset | Encode raw images/videos into `.meta` files |
| 3. Configure | Configure Your Training Recipe | Write a YAML config specifying model, data, and training settings |
| 4. Train | Fine-Tune the Model | Launch training with `torchrun` |
| 4b. Multi-Node | Multi-Node Training | Scale training across multiple nodes |
| 5. Generate | Generation / Inference | Run inference using the fine-tuned checkpoint |
For model-specific configuration (FLUX.1-dev, HunyuanVideo), see Model-Specific Notes.
Supported Models#
Model |
HF Model ID |
Task |
Parameters |
Example Config |
|---|---|---|---|---|
Wan 2.1 T2V 1.3B |
|
Text-to-Video |
1.3B |
|
FLUX.1-dev |
|
Text-to-Image |
12B |
|
HunyuanVideo 1.5 |
|
Text-to-Video |
β |
All models use FSDP2 for distributed training and flow matching for loss computation.
Install NeMo AutoModel#
pip3 install nemo-automodel
Alternatively, if you run into dependency or driver issues, use the pre-built Docker container:
docker pull nvcr.io/nvidia/nemo-automodel:26.02.00
docker run --gpus all -it --rm --shm-size=8g nvcr.io/nvidia/nemo-automodel:26.02.00
Important
Docker users: Checkpoints are lost when the container exits unless you bind-mount the checkpoint directory to the host. See Install with NeMo Docker Container and Saving Checkpoints When Using Docker.
For the full set of installation methods, see the installation guide.
Prepare Your Dataset#
Diffusion models operate in latent space, a compressed representation of visual data, rather than directly on raw images or videos. To avoid re-encoding data on every training step, the preprocessing pipeline encodes all inputs ahead of time and saves them as .meta files.
Each .meta file contains:
Latent representations produced by a VAE (Variational Autoencoder) from the raw visual data
Text embeddings produced by a text encoder from the associated captions/prompts
Fine-tuning then operates entirely on these pre-encoded .meta files, which is significantly faster than encoding on the fly.
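Conceptually, each `.meta` file bundles everything training needs for one sample: the VAE latents and the caption's text embedding. The sketch below illustrates that idea with `pickle` and toy values; it is a hypothetical layout, and the actual on-disk format written by the preprocessing tool may differ:

```python
import os
import pickle
import tempfile

# Hypothetical record layout; small lists stand in for real latent tensors.
record = {
    "latents": [[0.1, -0.3], [0.7, 0.2]],   # VAE encoding of the visual data
    "text_embedding": [0.5, 0.1, -0.2],     # text-encoder output for the caption
    "caption": "a dog running on a beach",
}

path = os.path.join(tempfile.mkdtemp(), "sample_000.meta")
with open(path, "wb") as f:
    pickle.dump(record, f)

# Training reads pre-encoded records like this instead of re-running the
# VAE and text encoder on every step.
with open(path, "rb") as f:
    loaded = pickle.load(f)
```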
Preprocess your data using the built-in tool at tools/diffusion/preprocessing_multiprocess.py. The script provides image and video subcommands:
Video preprocessing (using Wan 2.1 as a running example):
python -m tools.diffusion.preprocessing_multiprocess video \
--video_dir /data/videos \
--output_dir /cache \
--processor wan \
--resolution_preset 512p \
--caption_format sidecar
Image preprocessing (FLUX):
python -m tools.diffusion.preprocessing_multiprocess image \
--image_dir /data/images \
--output_dir /cache \
--processor flux
Video preprocessing (HunyuanVideo):
python -m tools.diffusion.preprocessing_multiprocess video \
--video_dir /data/videos \
--output_dir /cache \
--processor hunyuan \
--target_frames 121 \
--caption_format meta_json
For the full set of arguments and input format details, see the Diffusion Dataset Preparation guide.
Configure Your Training Recipe#
Fine-tuning is driven by two components:
- A recipe script (e.g., `train.py`): the Python entry point that orchestrates the training loop, including loading the model, building the dataloader, running forward and backward passes, computing the flow matching loss, checkpointing, and logging.
- A YAML configuration file: a text file specifying every setting the recipe uses, such as which model to fine-tune, where the data lives, optimizer hyperparameters, and the parallelism strategy. You customize training by editing this file rather than modifying code, which lets you scale seamlessly from 1 GPU to hundreds.
Below is the annotated wan2_1_t2v_flow.yaml, with each section explained:
seed: 42

# Weights & Biases experiment tracking
wandb:
  project: wan-t2v-flow-matching
  mode: online
  name: wan2_1_t2v_fm_v2

dist_env:
  backend: nccl
  timeout_minutes: 30

# Model configuration
# pretrained_model_name_or_path: Hugging Face model ID
# mode: "finetune" loads pretrained weights and adapts them to your dataset
model:
  pretrained_model_name_or_path: Wan-AI/Wan2.1-T2V-1.3B-Diffusers
  mode: finetune

# Training schedule
step_scheduler:
  global_batch_size: 8    # Effective batch size across all GPUs
  local_batch_size: 1     # Per-GPU batch size (gradient accumulation = global / (local * num_gpus))
  ckpt_every_steps: 1000  # Checkpoint frequency
  num_epochs: 100
  log_every: 2            # Log metrics every N steps

# Data: uses pre-encoded .meta files
data:
  dataloader:
    _target_: nemo_automodel.components.datasets.diffusion.build_video_multiresolution_dataloader
    cache_dir: PATH_TO_YOUR_DATA
    model_type: wan       # "wan" for Wan 2.1, "hunyuan" for HunyuanVideo
    base_resolution: [512, 512]
    dynamic_batch_size: false
    shuffle: true
    drop_last: false
    num_workers: 0

# Optimizer
optim:
  learning_rate: 5e-6
  optimizer:
    weight_decay: 0.01
    betas: [0.9, 0.999]

# Learning rate scheduler
lr_scheduler:
  lr_decay_style: cosine
  lr_warmup_steps: 0
  min_lr: 1e-6

# Flow matching configuration
flow_matching:
  adapter_type: "simple"        # Model-specific adapter (simple, flux, hunyuan)
  adapter_kwargs: {}
  timestep_sampling: "uniform"  # How timesteps are sampled during training
  logit_mean: 0.0
  logit_std: 1.0
  flow_shift: 3.0               # Shifts the flow schedule
  mix_uniform_ratio: 0.1
  sigma_min: 0.0
  sigma_max: 1.0
  num_train_timesteps: 1000
  i2v_prob: 0.3                 # Probability of image-to-video conditioning
  use_loss_weighting: true
  log_interval: 100
  summary_log_interval: 10

# FSDP2 distributed training
fsdp:
  tp_size: 1           # Tensor parallelism
  cp_size: 1           # Context parallelism
  pp_size: 1           # Pipeline parallelism
  dp_replicate_size: 1
  dp_size: 8           # Data parallelism (number of GPUs)

# Checkpointing
checkpoint:
  enabled: true
  checkpoint_dir: PATH_TO_YOUR_CKPT_DIR
  model_save_format: torch_save
  save_consolidated: false
  restore_from: null
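The comment on `local_batch_size` implies how gradient accumulation falls out of the batch settings. A quick sanity check (the helper name is ours, not part of the config schema):

```python
def grad_accum_steps(global_batch_size, local_batch_size, num_gpus):
    """Gradient accumulation steps implied by the step_scheduler settings."""
    per_step = local_batch_size * num_gpus  # samples processed per micro-step
    assert global_batch_size % per_step == 0, "global batch must divide evenly"
    return global_batch_size // per_step

# The recipe above (8 global, 1 local, 8 GPUs) needs no accumulation.
steps_default = grad_accum_steps(8, 1, 8)
# Raising global_batch_size to 32 on the same hardware implies 4 accumulation steps.
steps_larger = grad_accum_steps(32, 1, 8)
```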
Config Field Reference#
| Section | Required? | What to Change |
|---|---|---|
| `model` | Yes | Set `pretrained_model_name_or_path` to the Hugging Face model ID. |
| `data` | Yes | Set `cache_dir` to the directory containing your pre-encoded `.meta` files. |
| `step_scheduler` | Yes | Set batch sizes, epoch count, and checkpoint frequency. |
| `optim` | Yes | Set the learning rate and optimizer hyperparameters. |
| `flow_matching` | Yes | Set `adapter_type` to match the model (simple, flux, hunyuan). |
| `fsdp` | Yes | Set `dp_size` to the number of GPUs. |
| `checkpoint` | Recommended | Set `checkpoint_dir` to where checkpoints should be written. |
| `wandb` | Optional | Configure to enable Weights & Biases logging. |
Fine-Tune the Model#
Launch fine-tuning with torchrun:
torchrun --nproc-per-node=8 \
examples/diffusion/finetune/finetune.py \
-c examples/diffusion/finetune/wan2_1_t2v_flow.yaml
Adjust --nproc-per-node to match the number of GPUs on your node, and ensure fsdp.dp_size in the YAML matches.
Multi-Node Training#
When a single node doesn't provide enough GPUs or memory for your workload, you can scale training across multiple nodes. NeMo AutoModel handles multi-node distributed training through torchrun rendezvous and FSDP2; the same recipe script works on one node or many.
YAML Configuration Changes#
The main change is in the fsdp section. Set dp_size to the total number of GPUs across all nodes, and optionally increase dp_replicate_size for gradient replication across nodes.
For example, to train on 2 nodes with 8 GPUs each (16 GPUs total):
fsdp:
tp_size: 1
cp_size: 1
pp_size: 1
dp_replicate_size: 2 # Replicate across 2 nodes for robustness
dp_size: 16 # Total GPUs: 2 nodes Γ 8 GPUs
A complete multi-node config is provided at wan2_1_t2v_flow_multinode.yaml.
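The sizes above can be sanity-checked with a small helper before launching. This is our own illustrative check, not part of NeMo AutoModel, and the divisibility constraint on `dp_replicate_size` is our assumption:

```python
def check_multinode_sizes(nnodes, gpus_per_node, dp_size, dp_replicate_size):
    """Verify that fsdp sizes in the YAML agree with the launch topology."""
    total_gpus = nnodes * gpus_per_node
    assert dp_size == total_gpus, f"dp_size should equal total GPUs ({total_gpus})"
    # Assumption: replicate groups must evenly divide the data-parallel ranks.
    assert dp_size % dp_replicate_size == 0
    return total_gpus

# The 2-node x 8-GPU example: dp_size=16, dp_replicate_size=2.
check_multinode_sizes(nnodes=2, gpus_per_node=8, dp_size=16, dp_replicate_size=2)
```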
Launch with torchrun#
Run the following command on each node, setting NODE_RANK to 0 on the first node, 1 on the second, and so on:
export MASTER_ADDR=node0.hostname # hostname or IP of the first node
export MASTER_PORT=29500
export NODE_RANK=0 # 0 on master, 1 on second node, etc.
torchrun \
--nnodes=2 \
--nproc-per-node=8 \
--node_rank=${NODE_RANK} \
--rdzv_backend=c10d \
--rdzv_endpoint=${MASTER_ADDR}:${MASTER_PORT} \
examples/diffusion/finetune/finetune.py \
-c examples/diffusion/finetune/wan2_1_t2v_flow_multinode.yaml
Model-Specific Notes#
Use the table below to pick the right model for your use case:
| Use Case | Model | Why Choose It |
|---|---|---|
| Video generation on limited hardware | Wan 2.1 T2V 1.3B | Smallest model (1.3B params): fast iteration, fits on a single A100 40GB |
| High-quality image generation | FLUX.1-dev | State-of-the-art text-to-image with 12B params and guidance-based control |
| High-quality video generation | HunyuanVideo 1.5 | Larger video model with condition-latent support for richer motion and detail |
Wan 2.1 T2V 1.3B#
- Adapter type: `simple`
- Dataloader: `build_video_multiresolution_dataloader` with `model_type: wan`
- Config: wan2_1_t2v_flow.yaml
FLUX.1-dev (Text-to-Image)#
- Adapter type: `flux`
- Dataloader: `build_text_to_image_multiresolution_dataloader`
- Key differences:
  - Uses `pipeline_spec` to specify the transformer architecture:

        model:
          pipeline_spec:
            transformer_cls: "FluxTransformer2DModel"
            subfolder: "transformer"
            load_full_pipeline: false

  - Requires `guidance_scale` in adapter kwargs:

        flow_matching:
          adapter_type: "flux"
          adapter_kwargs:
            guidance_scale: 3.5
            use_guidance_embeds: true

  - Uses `logit_normal` timestep sampling instead of `uniform`
- Config: flux_t2i_flow.yaml
HunyuanVideo 1.5#
- Adapter type: `hunyuan`
- Dataloader: `build_video_multiresolution_dataloader` with `model_type: hunyuan`
- Key differences:
  - Requires `activation_checkpointing: true` in the FSDP config due to model size
  - Uses condition latents in adapter kwargs:

        flow_matching:
          adapter_type: "hunyuan"
          adapter_kwargs:
            use_condition_latents: true
            default_image_embed_shape: [729, 1152]

  - Uses `logit_normal` timestep sampling
- Config: hunyuan_t2v_flow.yaml
Generation / Inference#
Once training is complete, you can use the model to generate images or videos from text prompts. This step is called inference: whereas training is where the model learns from data, inference is where it produces new outputs.
In diffusion models, generation works by starting from random noise and iteratively denoising it, guided by your text prompt, until a clean image or video emerges.
The generation script (generate.py) handles this: it loads your model weights (pretrained or fine-tuned), configures the diffusion sampler, and produces outputs for one or more prompts.
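The denoising loop can be pictured as numerically integrating the learned velocity field from noise (t=0) to data (t=1). Below is a toy one-dimensional Euler sketch of that idea; it is illustrative only and not the sampler generate.py actually uses:

```python
def euler_sample(velocity_fn, x, num_steps):
    """Integrate dx/dt = v(x, t) from t=0 to t=1 with fixed-size Euler steps."""
    dt = 1.0 / num_steps
    t = 0.0
    for _ in range(num_steps):
        x = x + dt * velocity_fn(x, t)
        t += dt
    return x

# On a straight path the velocity is constant (here 2.0), so Euler
# integration from x=0.0 lands exactly on the "data" point 2.0.
sample = euler_sample(lambda x, t: 2.0, x=0.0, num_steps=4)
```

Real samplers replace the lambda with the fine-tuned transformer conditioned on the text prompt, and may use more sophisticated step schedules.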
Single-GPU (Wan 2.1 1.3B):
python examples/diffusion/generate/generate.py \
-c examples/diffusion/generate/configs/generate_wan.yaml
Multi-GPU (Wan 2.1 1.3B):
Wan 2.1 supports tensor parallelism for inference, which shards the transformer across GPUs to reduce per-GPU memory. Pass the distributed config via CLI overrides:
torchrun --nproc-per-node=8 \
examples/diffusion/generate/generate.py \
-c examples/diffusion/generate/configs/generate_wan.yaml \
--distributed.backend nccl \
--distributed.parallel_scheme.transformer.tp_size 8
With a fine-tuned checkpoint:
python examples/diffusion/generate/generate.py \
-c examples/diffusion/generate/configs/generate_wan.yaml \
--model.checkpoint ./checkpoints/step_1000 \
--inference.prompts '["A dog running on a beach"]'
FLUX image generation:
python examples/diffusion/generate/generate.py \
-c examples/diffusion/generate/configs/generate_flux.yaml
HunyuanVideo:
python examples/diffusion/generate/generate.py \
-c examples/diffusion/generate/configs/generate_hunyuan.yaml
Available Generation Configs#
| Config | Model | Output | GPUs |
|---|---|---|---|
| generate_wan.yaml | Wan 2.1 1.3B | Video | 1 |
| generate_flux.yaml | FLUX.1-dev | Image | 1 |
| generate_hunyuan.yaml | HunyuanVideo | Video | 1 |
Note
You can use --model.checkpoint ./checkpoints/LATEST to automatically load the most recent checkpoint.
Hardware Requirements#
| Component | Minimum | Recommended |
|---|---|---|
| GPU | A100 40GB | A100 80GB / H100 |
| GPU count | 4 | 8 |
| RAM | 128 GB | 256 GB+ |
| Storage | 500 GB SSD | 2 TB NVMe |