Granite MoE#

IBM Granite MoE models extend the Granite architecture with Mixture-of-Experts (MoE) layers: each token activates only a subset of the model's parameters, which scales capacity more efficiently than a dense model of the same total size. PowerMoE (IBM Research) uses the same architecture.
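
In an MoE layer, a learned router sends each token through only a small subset (top-k) of expert feed-forward networks, so only a fraction of the total parameters does work per token. The sketch below shows the general pattern in PyTorch; it is illustrative only, not Granite's actual layer (gating details, dimensions, and the shared-expert variant differ).

import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyMoE(nn.Module):
    """Minimal top-k MoE feed-forward block (illustrative, not Granite's layer)."""
    def __init__(self, d_model=64, d_ff=128, n_experts=8, top_k=2):
        super().__init__()
        self.top_k = top_k
        self.router = nn.Linear(d_model, n_experts, bias=False)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.SiLU(), nn.Linear(d_ff, d_model))
            for _ in range(n_experts)
        )

    def forward(self, x):                      # x: (tokens, d_model)
        scores = self.router(x)                # one logit per expert per token
        weights, idx = scores.topk(self.top_k, dim=-1)
        weights = F.softmax(weights, dim=-1)   # renormalize over the chosen experts
        out = torch.zeros_like(x)
        for k in range(self.top_k):            # only the top-k experts run per token
            for e, expert in enumerate(self.experts):
                mask = idx[:, k] == e
                if mask.any():
                    out[mask] += weights[mask, k].unsqueeze(-1) * expert(x[mask])
        return out

x = torch.randn(4, 64)
print(TinyMoE()(x).shape)  # torch.Size([4, 64])

This routing is what produces the "total vs. activated" parameter counts quoted for the models below.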

  • Task: Text Generation (MoE)

  • Architecture: GraniteMoeForCausalLM

  • Parameters: 1B – 3B

  • HF Org: ibm-granite

Available Models#

  • Granite 3.0 1B A400M Base — 1B total, 400M activated

  • Granite 3.0 3B A800M Instruct — 3B total, 800M activated

  • PowerMoE-3B (IBM Research) — 3B total

  • MoE-7B-1B-Active-Shared-Experts (IBM Research, test model)
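
The total-vs-activated split above comes from the top-k routing sketched earlier: only a few of each layer's experts run per token. You can read the routing hyperparameters from a checkpoint's config; the field names below follow the GraniteMoe config in transformers and may differ across versions, so treat them as an assumption to verify against the model card:

from transformers import AutoConfig

cfg = AutoConfig.from_pretrained("ibm-granite/granite-3.0-1b-a400m-base")
# Assumed field names: num_local_experts = experts per MoE layer,
# num_experts_per_tok = how many of them are active for each token.
print(cfg.num_local_experts, cfg.num_experts_per_tok)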

Architectures#

  • GraniteMoeForCausalLM

  • GraniteMoeSharedForCausalLM — variant with shared experts
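
The same AutoConfig call also records which of these classes a given checkpoint maps to, via the architectures field:

from transformers import AutoConfig

# architectures names the *ForCausalLM class the checkpoint expects.
print(AutoConfig.from_pretrained("ibm-granite/granite-3.0-1b-a400m-base").architectures)
# e.g. ['GraniteMoeForCausalLM']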

Example HF Models#

  • Granite 3.0 1B A400M Base: ibm-granite/granite-3.0-1b-a400m-base

  • Granite 3.0 3B A800M Instruct: ibm-granite/granite-3.0-3b-a800m-instruct

  • PowerMoE 3B: ibm/PowerMoE-3b
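
Any of these IDs loads through the standard transformers auto classes. A quick generation smoke test (greedy decode; adjust the dtype and device placement for your hardware):

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ibm-granite/granite-3.0-1b-a400m-base"
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

inputs = tok("The capital of France is", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=20)
print(tok.decode(out[0], skip_special_tokens=True))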

Try with NeMo AutoModel#

Install NeMo AutoModel and follow the fine-tuning guide to configure a recipe for this model.

1. Install (full instructions):

pip install nemo-automodel
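
To confirm the install, try importing the package. The module name nemo_automodel is assumed from the pip package name; check the installation guide if it differs:

python -c "import nemo_automodel; print('nemo-automodel imported OK')"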

2. Clone the repo to get example recipes you can adapt:

git clone https://github.com/NVIDIA-NeMo/Automodel.git
cd Automodel

3. Fine-tune by adapting a base LLM recipe — override the model ID on the CLI:

automodel --nproc-per-node=8 examples/llm_finetune/llama3_2/llama3_2_1b_squad.yaml \
  --model.pretrained_model_name_or_path <MODEL_HF_ID>

Replace <MODEL_HF_ID> with the model ID from Example HF Models above.
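
For example, with the 1B base model from the table above:

automodel --nproc-per-node=8 examples/llm_finetune/llama3_2/llama3_2_1b_squad.yaml \
  --model.pretrained_model_name_or_path ibm-granite/granite-3.0-1b-a400m-base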

Run with Docker#

1. Pull the container and mount a checkpoint directory:

docker run --gpus all -it --rm \
  --shm-size=8g \
  -v $(pwd)/checkpoints:/opt/Automodel/checkpoints \
  nvcr.io/nvidia/nemo-automodel:26.02.00

2. The example recipes live under /opt/Automodel/examples/. Change to the repo root:

cd /opt/Automodel

3. Fine-tune:

automodel --nproc-per-node=8 examples/llm_finetune/llama3_2/llama3_2_1b_squad.yaml \
  --model.pretrained_model_name_or_path <MODEL_HF_ID>
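
To keep fine-tuned weights after the container exits, write them under the mounted checkpoints volume. Many example recipes expose a checkpoint section in the YAML; the checkpoint.checkpoint_dir override below is an assumption, so verify the key in the recipe you adapt:

automodel --nproc-per-node=8 examples/llm_finetune/llama3_2/llama3_2_1b_squad.yaml \
  --model.pretrained_model_name_or_path ibm-granite/granite-3.0-1b-a400m-base \
  --checkpoint.checkpoint_dir /opt/Automodel/checkpoints  # key name assumed; check the YAML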

See the Installation Guide and LLM Fine-Tuning Guide.

Fine-Tuning#

See the LLM Fine-Tuning Guide.

Hugging Face Model Cards#