# Qwen3-VL / Qwen3-VL-MoE
Qwen3-VL is Alibaba Cloud's third-generation vision-language model series. The MoE variants activate only a fraction of their parameters per token, enabling efficient large-scale inference.
| Task | Architecture | Parameters | HF Org |
|---|---|---|---|
| Image-Text-to-Text | `Qwen3VLForConditionalGeneration` | 4B – 235B | Qwen |
## Available Models

- Qwen3-VL-8B-Instruct: 8B
- Qwen3-VL-4B-Instruct: 4B
- Qwen3-VL-MoE-30B: 30B total (MoE)
- Qwen3-VL-MoE-235B: 235B total (MoE)
## Architecture

`Qwen3VLForConditionalGeneration`
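
The architecture name maps to the model class exposed through Hugging Face `transformers`. A minimal loading sketch, assuming a recent `transformers` release with Qwen3-VL support (the model ID is taken from the table below; the MoE checkpoints use the separate `Qwen3VLMoeForConditionalGeneration` class):

```python
# Minimal loading sketch; assumes a transformers release with Qwen3-VL support.
from transformers import Qwen3VLForConditionalGeneration

model = Qwen3VLForConditionalGeneration.from_pretrained(
    "Qwen/Qwen3-VL-4B-Instruct",  # dense Instruct variant (see table below)
    torch_dtype="auto",           # load in the checkpoint's native dtype
    device_map="auto",            # place layers across available GPUs
)
```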
## Example HF Models

| Model | HF ID |
|---|---|
| Qwen3-VL 4B Instruct | Qwen/Qwen3-VL-4B-Instruct |
| Qwen3-VL 8B Instruct | Qwen/Qwen3-VL-8B-Instruct |
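
For a quick smoke test outside NeMo AutoModel, these checkpoints can be driven directly through the `transformers` processor API. A hedged sketch (the image URL is a placeholder, and the chat-template flow assumes a recent `transformers` release):

```python
from transformers import AutoProcessor, Qwen3VLForConditionalGeneration

model_id = "Qwen/Qwen3-VL-4B-Instruct"
model = Qwen3VLForConditionalGeneration.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)
processor = AutoProcessor.from_pretrained(model_id)

# One user turn mixing a single image with a text prompt.
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://example.com/sample.jpg"},  # placeholder
            {"type": "text", "text": "Describe this image."},
        ],
    }
]

# Render the chat template and tokenize text + image into model inputs.
inputs = processor.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

generated = model.generate(**inputs, max_new_tokens=128)
# Drop the prompt tokens before decoding the model's reply.
reply = processor.batch_decode(
    generated[:, inputs["input_ids"].shape[1]:], skip_special_tokens=True
)[0]
print(reply)
```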
## Example Recipes

| Recipe | Dataset | Description |
|---|---|---|
| `qwen3_vl_4b_instruct_rdr.yaml` | rdr-items | SFT — Qwen3-VL 4B on RDR Items |
| | rdr-items | SFT — Qwen3-VL 8B on RDR Items |
| | MedPix-VQA | SFT — Qwen3-VL-MoE 30B with TE + DeepEP |
| | MedPix-VQA | SFT — Qwen3-VL-MoE 235B |
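
Each recipe is a plain YAML file, so its configuration can be inspected (or tweaked) before launching. A small sketch, assuming PyYAML is installed and the commands below are run from inside the cloned repo (see the next section):

```python
# Peek at a recipe's configuration before running it; assumes PyYAML is installed.
import yaml

recipe_path = "examples/vlm_finetune/qwen3/qwen3_vl_4b_instruct_rdr.yaml"
with open(recipe_path) as f:
    cfg = yaml.safe_load(f)

# List the top-level sections to see what the recipe configures.
for key, value in cfg.items():
    print(f"{key}: {type(value).__name__}")
```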
## Try with NeMo AutoModel

1. Install NeMo AutoModel (see the Installation Guide for full instructions):

   ```bash
   pip install nemo-automodel
   ```

2. Clone the repo to get the example recipes:

   ```bash
   git clone https://github.com/NVIDIA-NeMo/Automodel.git
   cd Automodel
   ```

3. Run the recipe from inside the repo:

   ```bash
   automodel --nproc-per-node=8 examples/vlm_finetune/qwen3/qwen3_vl_4b_instruct_rdr.yaml
   ```
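
NeMo AutoModel saves fine-tuned checkpoints in Hugging Face format, so the result can typically be loaded back with `transformers` for evaluation. A hedged sketch (the checkpoint path is hypothetical; use the directory your recipe's checkpoint settings actually write to):

```python
# Load a fine-tuned checkpoint for evaluation.
from transformers import AutoProcessor, Qwen3VLForConditionalGeneration

ckpt = "checkpoints/qwen3_vl_4b_instruct_rdr/final"  # hypothetical path
model = Qwen3VLForConditionalGeneration.from_pretrained(
    ckpt, torch_dtype="auto", device_map="auto"
)
processor = AutoProcessor.from_pretrained(ckpt)
```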
## Run with Docker

1. Pull the container and mount a checkpoint directory:

   ```bash
   docker run --gpus all -it --rm \
     --shm-size=8g \
     -v $(pwd)/checkpoints:/opt/Automodel/checkpoints \
     nvcr.io/nvidia/nemo-automodel:26.02.00
   ```

2. Navigate to the Automodel directory (where the recipes are):

   ```bash
   cd /opt/Automodel
   ```

3. Run the recipe:

   ```bash
   automodel --nproc-per-node=8 examples/vlm_finetune/qwen3/qwen3_vl_4b_instruct_rdr.yaml
   ```
See the Installation Guide and VLM Fine-Tuning Guide.
## Fine-Tuning

See the VLM Fine-Tuning Guide.