Qwen3 MoE#

Qwen3 MoE is the Mixture-of-Experts variant of the Qwen3 series from Alibaba Cloud, activating a small fraction of parameters per token for efficient large-scale training.

| Task | Architecture | Parameters | HF Org |
|------|--------------|------------|--------|
| Text Generation (MoE) | Qwen3MoeForCausalLM | 30B – 235B total | Qwen |

Available Models#

  • Qwen3-30B-A3B: 30B total parameters, 3B activated per token

  • Qwen3-235B-A22B: 235B total parameters, 22B activated per token
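
The "A3B"/"A22B" suffixes encode the activated parameter count: in both cases only about a tenth of the total parameters are active for any given token. A quick back-of-the-envelope check in Python, using only the numbers from the list above:

```python
# Activated-parameter fraction implied by the model names above.
models = {
    "Qwen3-30B-A3B": (30e9, 3e9),      # (total params, activated per token)
    "Qwen3-235B-A22B": (235e9, 22e9),
}

for name, (total, active) in models.items():
    print(f"{name}: {active / total:.1%} of parameters active per token")
# Qwen3-30B-A3B: 10.0% of parameters active per token
# Qwen3-235B-A22B: 9.4% of parameters active per token
```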

Architecture#

  • Qwen3MoeForCausalLM

Example HF Models#

| Model | HF ID |
|-------|-------|
| Qwen3 30B A3B | Qwen/Qwen3-30B-A3B |
| Qwen3 235B A22B | Qwen/Qwen3-235B-A22B |
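
Outside of NeMo AutoModel, these checkpoints also load through the standard Hugging Face transformers API (the architecture above, Qwen3MoeForCausalLM, is registered there). A minimal sketch, assuming a transformers release with Qwen3 MoE support, `accelerate` installed for `device_map`, and enough GPU memory for the 30B checkpoint:

```python
# Minimal sketch: loading Qwen3-30B-A3B via Hugging Face transformers.
# Assumes a transformers version with Qwen3 MoE support, `accelerate`
# installed, and enough GPU memory to hold the 30B checkpoint.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen3-30B-A3B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",  # keep the checkpoint's native dtype
    device_map="auto",   # shard across the visible GPUs
)

inputs = tokenizer("Mixture-of-Experts models are", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```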

Example Recipes#

| Recipe | Description |
|--------|-------------|
| qwen3_moe_30b_te_deepep.yaml | SFT — Qwen3 MoE 30B with Transformer Engine (TE) + DeepEP |
| qwen3_moe_30b_lora.yaml | LoRA — Qwen3 MoE 30B |
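
Each recipe is a plain YAML file in the repository, so it can be inspected, or copied and tweaked, before launching. A small sketch, assuming the repo has been cloned as in the steps below and PyYAML is installed; the actual keys are defined by the recipe files themselves:

```python
# Sketch: list a recipe's top-level sections before launching.
# Assumes the Automodel repo is cloned (see the steps below) and
# PyYAML is installed; the exact keys come from the recipe itself.
import yaml

with open("examples/llm_finetune/qwen/qwen3_moe_30b_te_deepep.yaml") as f:
    recipe = yaml.safe_load(f)

for section, value in recipe.items():
    print(f"{section}: {type(value).__name__}")
```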

Try with NeMo AutoModel#

1. Install NeMo AutoModel (see the Installation Guide for full instructions):

```bash
pip install nemo-automodel
```

2. Clone the repo to get the example recipes:

```bash
git clone https://github.com/NVIDIA-NeMo/Automodel.git
cd Automodel
```

3. Run the recipe from inside the repo:

```bash
automodel --nproc-per-node=8 examples/llm_finetune/qwen/qwen3_moe_30b_te_deepep.yaml
```
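
Here `--nproc-per-node=8` launches one training process per GPU (the flag mirrors torchrun's), so it should match the number of GPUs visible on the node. A quick check with PyTorch before launching:

```python
# Confirm the node exposes the GPU count the launch command expects.
import torch

n_gpus = torch.cuda.device_count()
print(f"{n_gpus} CUDA device(s) visible")
assert n_gpus >= 8, "lower --nproc-per-node to match the available GPUs"
```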

Run with Docker#

1. Pull the container and mount a checkpoint directory:

```bash
docker run --gpus all -it --rm \
  --shm-size=8g \
  -v $(pwd)/checkpoints:/opt/Automodel/checkpoints \
  nvcr.io/nvidia/nemo-automodel:26.02.00
```

2. Navigate to the AutoModel directory (where the recipes are):

```bash
cd /opt/Automodel
```

3. Run the recipe:

```bash
automodel --nproc-per-node=8 examples/llm_finetune/qwen/qwen3_moe_30b_te_deepep.yaml
```
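
Before launching, it can be worth confirming from inside the container that the recipe path exists and that the checkpoint mount from step 1 is in place; a small sketch using only paths from the commands above:

```python
# Sanity check inside the container: recipe present, checkpoint mount in place.
from pathlib import Path

recipe = Path("/opt/Automodel/examples/llm_finetune/qwen/qwen3_moe_30b_te_deepep.yaml")
ckpt_dir = Path("/opt/Automodel/checkpoints")  # mounted from the host in step 1

print("recipe found:", recipe.exists())
print("checkpoint mount present:", ckpt_dir.is_dir())
```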

See the Installation Guide and LLM Fine-Tuning Guide.

Fine-Tuning#

See the LLM Fine-Tuning Guide and the Large MoE Fine-Tuning Guide.

Hugging Face Model Cards#

  • Qwen3-30B-A3B: https://huggingface.co/Qwen/Qwen3-30B-A3B

  • Qwen3-235B-A22B: https://huggingface.co/Qwen/Qwen3-235B-A22B