Mixtral#
Mixtral is Mistral AI’s Mixture-of-Experts model series. Each token is processed by a subset of experts, enabling a large total parameter count with efficient per-token compute.
| Task | Text Generation (MoE) |
|---|---|
| Architecture | MixtralForCausalLM |
| Parameters | 47B total / 13B active |
| HF Org | mistralai |
Available Models#
- Mixtral-8x7B: 8 experts, 2 active per token (~13B active; top-2 routing is sketched after this list)
- Mixtral-8x7B-Instruct: instruction-tuned variant of Mixtral-8x7B
- Mixtral-8x22B: 8 experts, 2 active per token (~39B active)
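
Top-2 routing is what keeps per-token compute near the active-parameter figures despite the much larger total parameter count. The sketch below is a deliberately simplified PyTorch illustration of that routing pattern; it is not the actual `MixtralForCausalLM` implementation, and the layer sizes are made up.

```python
# Simplified top-2 mixture-of-experts layer, illustrating why only a fraction of
# the total parameters is exercised for any single token. Sizes are illustrative,
# not Mixtral's real dimensions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Top2MoE(nn.Module):
    def __init__(self, hidden=64, ffn=256, num_experts=8, top_k=2):
        super().__init__()
        self.top_k = top_k
        self.router = nn.Linear(hidden, num_experts, bias=False)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(hidden, ffn), nn.SiLU(), nn.Linear(ffn, hidden))
            for _ in range(num_experts)
        )

    def forward(self, x):  # x: [num_tokens, hidden]
        scores = self.router(x)                            # score all experts...
        weights, chosen = scores.topk(self.top_k, dim=-1)  # ...keep only the top 2
        weights = F.softmax(weights, dim=-1)
        out = torch.zeros_like(x)
        for slot in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = chosen[:, slot] == e                 # tokens routed to expert e
                if mask.any():
                    out[mask] += weights[mask, slot].unsqueeze(-1) * expert(x[mask])
        return out

tokens = torch.randn(4, 64)
print(Top2MoE()(tokens).shape)  # torch.Size([4, 64])
```

Only the two selected experts' feed-forward weights run for each token, while the attention and embedding weights are shared; that is how 47B total parameters yield roughly 13B active per token.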
Architecture#
`MixtralForCausalLM`
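
These checkpoints load through the Hugging Face Transformers class named above (`AutoModelForCausalLM` resolves to it as well). A minimal sketch, assuming `transformers` and `accelerate` are installed and the node has enough GPU memory; the 8x7B weights are roughly 90 GB in bf16, so `device_map="auto"` shards them across the available GPUs:

```python
# Load a Mixtral checkpoint through its MixtralForCausalLM architecture class.
# Sketch only: the bf16 weights need on the order of 90 GB of GPU memory in total.
import torch
from transformers import AutoTokenizer, MixtralForCausalLM

model_id = "mistralai/Mixtral-8x7B-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = MixtralForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",  # shard across available GPUs
)
print(model.config.num_local_experts, model.config.num_experts_per_tok)  # 8 2
```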
Example HF Models#
| Model | HF ID |
|---|---|
| Mixtral 8x7B v0.1 | [mistralai/Mixtral-8x7B-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-v0.1) |
| Mixtral 8x7B Instruct v0.1 | [mistralai/Mixtral-8x7B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1) |
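
For a quick smoke test of the instruction-tuned checkpoint from the table, the sketch below applies its chat template and generates a short reply; the prompt and generation settings are arbitrary choices, not part of any recipe:

```python
# Minimal chat-style generation with Mixtral-8x7B-Instruct-v0.1 (illustrative only).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mixtral-8x7B-Instruct-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "Explain mixture-of-experts routing in one sentence."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output_ids = model.generate(input_ids, max_new_tokens=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```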
Example Recipes#
| Recipe | Description |
|---|---|
| SFT — Mixtral 8x7B on SQuAD | Supervised fine-tuning of Mixtral 8x7B on the SQuAD dataset |
| LoRA — Mixtral 8x7B on SQuAD | LoRA fine-tuning of Mixtral 8x7B on the SQuAD dataset |
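
The two recipes differ mainly in which parameters are trained: SFT updates the model weights directly, while LoRA trains small low-rank adapters on top of frozen weights. The sketch below illustrates what LoRA adds, using the Hugging Face `peft` library rather than the recipe's own configuration; the rank and target modules are assumptions, not values from the recipe YAML.

```python
# Conceptual LoRA illustration with the Hugging Face peft library. The actual
# training setup lives in the recipe YAML; rank and target modules here are guesses.
import torch
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mixtral-8x7B-v0.1", torch_dtype=torch.bfloat16, device_map="auto"
)
lora_cfg = LoraConfig(
    r=16,                      # adapter rank (illustrative)
    lora_alpha=32,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # attention projections
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_cfg)
model.print_trainable_parameters()   # adapters are a tiny fraction of the 47B total
```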
Try with NeMo AutoModel#
1. Install (full instructions in the Installation Guide):

   ```bash
   pip install nemo-automodel
   ```

2. Clone the repository to get the example recipes:

   ```bash
   git clone https://github.com/NVIDIA-NeMo/Automodel.git
   cd Automodel
   ```

3. Run the recipe from inside the repository (set `--nproc-per-node` to the number of GPUs on the node):

   ```bash
   automodel --nproc-per-node=8 examples/llm_finetune/mistral/mixtral-8x7b-v0-1_squad.yaml
   ```
Run with Docker
1. Pull the container and mount a checkpoint directory:

   ```bash
   docker run --gpus all -it --rm \
     --shm-size=8g \
     -v $(pwd)/checkpoints:/opt/Automodel/checkpoints \
     nvcr.io/nvidia/nemo-automodel:26.02.00
   ```

2. Navigate to the AutoModel directory (where the recipes are):

   ```bash
   cd /opt/Automodel
   ```

3. Run the recipe:

   ```bash
   automodel --nproc-per-node=8 examples/llm_finetune/mistral/mixtral-8x7b-v0-1_squad.yaml
   ```
See the Installation Guide and LLM Fine-Tuning Guide.
Fine-Tuning#
See the LLM Fine-Tuning Guide and the Large MoE Fine-Tuning Guide.