Hy3 (HunyuanLarge)#
Hy3-preview is a 295B-parameter Mixture-of-Experts (MoE) language model from Tencent. It features 80 transformer layers (layer 0 dense, layers 1–79 MoE); 192 routed experts plus 1 shared expert with top-8 sigmoid routing; Grouped Query Attention (64 query / 8 KV heads) with per-head QK RMSNorm; RoPE position embeddings; and an `e_score_correction_bias` gate buffer for expert-load correction. It supports a 256K context window.
| Task | Text Generation (MoE) |
|---|---|
| Architecture | `HYV3ForCausalLM` |
| Parameters | 295B total |
| HF Org | Tencent |
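The gating scheme described above, top-8 sigmoid routing over 192 experts with a score-correction bias plus an always-active shared expert, is easiest to see in code. Below is a minimal PyTorch sketch of that routing logic, not the model's actual implementation; the class name, hidden size, and the top-k weight normalization are illustrative assumptions.

```python
import torch
import torch.nn as nn

class Hy3RouterSketch(nn.Module):
    """Minimal sketch of top-8 sigmoid routing with a score-correction
    bias, per the description above. Class name, hidden size, and the
    weight normalization are assumptions, not the real implementation."""

    def __init__(self, hidden_size: int = 4096, n_experts: int = 192, top_k: int = 8):
        super().__init__()
        self.top_k = top_k
        self.gate = nn.Linear(hidden_size, n_experts, bias=False)
        # Non-trainable buffer that nudges expert *selection* toward balanced
        # load, mirroring the e_score_correction_bias buffer mentioned above.
        self.register_buffer("e_score_correction_bias", torch.zeros(n_experts))

    def forward(self, hidden: torch.Tensor):
        # Sigmoid gate scores (rather than a softmax over experts).
        scores = torch.sigmoid(self.gate(hidden))               # [tokens, experts]
        # Bias-corrected scores decide WHICH top-8 experts fire per token...
        _, top_idx = (scores + self.e_score_correction_bias).topk(self.top_k, dim=-1)
        # ...while the uncorrected scores weight their outputs (assumed here
        # to be normalized to sum to 1).
        top_scores = scores.gather(-1, top_idx)                 # [tokens, top_k]
        weights = top_scores / top_scores.sum(-1, keepdim=True)
        # The 1 shared expert runs on every token, bypassing this selection.
        return top_idx, weights
```

Per token, only 8 of the 192 routed experts (plus the shared expert) execute, which is why the active parameter count per token is far below the 295B total.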
Available Models#
- **Hy3-preview**: 295B total, top-8 routed experts activated per token
Architectures#
- `HYV3ForCausalLM`
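Checkpoints exposing this architecture can be loaded with the standard Hugging Face `transformers` APIs. The following is a minimal sketch: the repo id is a placeholder (no HF ID is listed on this page), and `trust_remote_code=True` is assumed to be needed for the custom `HYV3ForCausalLM` class. `device_map="auto"` requires the `accelerate` package.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder repo id -- substitute the actual Hy3-preview HF ID.
model_id = "<hf-org>/<hy3-preview>"

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",      # keep the checkpoint's native precision
    device_map="auto",       # shard the 295B parameters across available GPUs
    trust_remote_code=True,  # assumed needed for the custom HYV3ForCausalLM class
)

inputs = tokenizer("Hello from Hy3:", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```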
Example HF Models#
| Model | HF ID |
|---|---|
| Hy3-preview | |
Example Recipes#
| Recipe | Description |
|---|---|
| SFT — Hy3-preview with DeepEP | Supervised fine-tuning of Hy3-preview with DeepEP expert-parallel communication (`examples/llm_finetune/hy_v3/hy3_preview_deepep.yaml`) |
Try with NeMo AutoModel#
1. Install NeMo AutoModel:

   ```bash
   pip install nemo-automodel
   ```
2. Clone the repo to get the example recipes:

   ```bash
   git clone https://github.com/NVIDIA-NeMo/Automodel.git
   cd Automodel
   ```
3. Run the recipe from inside the repo:

   ```bash
   automodel --nproc-per-node=8 examples/llm_finetune/hy_v3/hy3_preview_deepep.yaml
   ```
See the NeMo AutoModel Installation Guide and LLM Fine-Tuning Guide.
Fine-Tuning#
See the LLM Fine-Tuning Guide and the Large MoE Fine-Tuning Guide.