bridge.models.deepseek.deepseek_provider

Module Contents

Classes

| Class | Description |
| --- | --- |
| DeepSeekModelProvider | Base config for DeepSeek V2 and V3 models. |
| DeepSeekV2ModelProvider | DeepSeek-V2 Model: https://github.com/deepseek-ai/DeepSeek-V2 |
| DeepSeekV2LiteModelProvider | DeepSeek-V2-Lite Model: https://github.com/deepseek-ai/DeepSeek-V2 (HuggingFace: https://huggingface.co/deepseek-ai/DeepSeek-V2-Lite) |
| DeepSeekV3ModelProvider | DeepSeek-V3 Model: https://github.com/deepseek-ai/DeepSeek-V3 |
| MoonlightModelProvider16B | Moonlight-16B-A3B Model: https://github.com/moonshotai/Moonlight-16B-A3B |
| DeepSeekProvider | Deprecated alias for DeepSeekModelProvider. |
| DeepSeekV2Provider | Deprecated alias for DeepSeekV2ModelProvider. |
| DeepSeekV2LiteProvider | Deprecated alias for DeepSeekV2LiteModelProvider. |
| DeepSeekV3Provider | Deprecated alias for DeepSeekV3ModelProvider. |
| MoonlightProvider | Deprecated alias for MoonlightModelProvider16B. |

Functions

| Function | Description |
| --- | --- |
| _warn_deprecated | Emit a deprecation warning for a deprecated provider alias. |

API
- class bridge.models.deepseek.deepseek_provider.DeepSeekModelProvider
Bases: megatron.bridge.models.transformer_config.MLATransformerConfig, megatron.bridge.models.gpt_provider.GPTModelProvider
Base config for DeepSeek V2 and V3 models.
- transformer_layer_spec: Union[megatron.core.transformer.ModuleSpec, Callable[[megatron.bridge.models.gpt_provider.GPTModelProvider], megatron.core.transformer.ModuleSpec]] = partial(...)
- normalization: str = 'RMSNorm'
- activation_func: Callable = None
- gated_linear_unit: bool = True
- position_embedding_type: str = 'rope'
- add_bias_linear: bool = False
- share_embeddings_and_output_weights: bool = False
- num_attention_heads: int = 128
- kv_channels: int = 128
- max_position_embeddings: int = 4096
- seq_length: int = 4096
- rotary_base: float = 10000.0
- make_vocab_size_divisible_by: int = 3200
- mtp_num_layers: Optional[int] = None
- mtp_loss_scaling_factor: Optional[float] = None
- attention_dropout: float = 0.0
- hidden_dropout: float = 0.0
- qk_layernorm: bool = True
- moe_grouped_gemm: bool = True
- moe_router_pre_softmax: bool = True
- moe_token_dispatcher_type: str = 'alltoall'
- moe_router_load_balancing_type: str = 'seq_aux_loss'
- moe_shared_expert_overlap: bool = True
- moe_router_dtype: Optional[str] = 'fp32'
- q_lora_rank: int = 1536
- kv_lora_rank: int = 512
- qk_head_dim: int = 128
- qk_pos_emb_head_dim: int = 64
- v_head_dim: int = 128
- rotary_scaling_factor: float = 40
- mscale: float = 1.0
- mscale_all_dim: float = 1.0
- init_method_std: float = 0.006
- layernorm_epsilon: float = 1e-06
- bf16: bool = True
- params_dtype: torch.dtype = None
- async_tensor_model_parallel_allreduce: bool = True
- attention_softmax_in_fp32: bool = False
- persist_layer_norm: bool = True
- num_layers_in_first_pipeline_stage: Optional[int] = None
- num_layers_in_last_pipeline_stage: Optional[int] = None
- account_for_embedding_in_pipeline_split: bool = False
- account_for_loss_in_pipeline_split: bool = False
- multi_latent_attention: bool = True
- apply_rope_fusion: bool = False
- bias_activation_fusion: bool = True
- bias_dropout_fusion: bool = True
- masked_softmax_fusion: bool = True
- cross_entropy_loss_fusion: bool = True
- cross_entropy_fusion_impl: str = 'te'
- moe_permute_fusion: bool = None
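Because these providers are dataclasses, any field listed above can be overridden at construction time. A minimal sketch, assuming the module is importable under megatron.bridge (as the base-class paths above suggest) and that a concrete subclass such as DeepSeekV2LiteModelProvider constructs cleanly with its defaults:

```python
from megatron.bridge.models.deepseek.deepseek_provider import (
    DeepSeekV2LiteModelProvider,
)

# Override a few inherited defaults at construction time; every other
# field keeps the default documented above.
provider = DeepSeekV2LiteModelProvider(
    seq_length=8192,               # default: 4096
    max_position_embeddings=8192,  # default: 4096
    attention_dropout=0.1,         # default: 0.0
)
print(provider.normalization)      # 'RMSNorm', inherited from the base config
```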
- class bridge.models.deepseek.deepseek_provider.DeepSeekV2ModelProvider
Bases: bridge.models.deepseek.deepseek_provider.DeepSeekModelProvider
DeepSeek-V2 Model: https://github.com/deepseek-ai/DeepSeek-V2
- num_layers: int = 60
- hidden_size: int = 5120
- ffn_hidden_size: int = 12288
- num_moe_experts: int = 160
- moe_ffn_hidden_size: int = 1536
- moe_shared_expert_intermediate_size: int = 3072
- moe_layer_freq: Union[int, List[int]] = field(...)
- moe_router_topk: int = 6
- moe_router_num_groups: int = 8
- moe_router_group_topk: int = 3
- moe_router_topk_scaling_factor: float = 16.0
- moe_aux_loss_coeff: float = 0.001
- mscale: float = 0.707
- mscale_all_dim: float = 0.707
- vocab_size: int = 102400
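The grouped-routing fields above compose: the 160 routed experts are divided into 8 groups, and each token first selects its top-3 groups before its top-6 experts overall. A quick sanity check, under the same assumption that the provider constructs with defaults:

```python
from megatron.bridge.models.deepseek.deepseek_provider import DeepSeekV2ModelProvider

cfg = DeepSeekV2ModelProvider()
# 160 routed experts / 8 groups = 20 experts per group; each token is
# restricted to its top-3 groups before the final top-6 expert selection.
assert cfg.num_moe_experts // cfg.moe_router_num_groups == 20
assert cfg.moe_router_group_topk == 3 and cfg.moe_router_topk == 6
```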
- class bridge.models.deepseek.deepseek_provider.DeepSeekV2LiteModelProvider
Bases: bridge.models.deepseek.deepseek_provider.DeepSeekV2ModelProvider
DeepSeek-V2-Lite Model: https://github.com/deepseek-ai/DeepSeek-V2 (HuggingFace: https://huggingface.co/deepseek-ai/DeepSeek-V2-Lite)
- num_layers: int = 27
- hidden_size: int = 2048
- ffn_hidden_size: int = 10944
- num_attention_heads: int = 16
- kv_channels: int = 16
- q_lora_rank: int = None
- num_moe_experts: int = 64
- moe_ffn_hidden_size: int = 1408
- moe_shared_expert_intermediate_size: int = 2816
- moe_layer_freq: Union[int, List[int]] = field(...)
- moe_router_topk: int = 6
- moe_router_num_groups: int = 1
- moe_router_group_topk: int = 1
- moe_router_topk_scaling_factor: float = 1.0
- vocab_size: int = 102400
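Note that q_lora_rank is None here: the Lite variant skips low-rank query compression in multi-latent attention while keeping the KV compression rank of 512 inherited from the base config. An illustrative check, under the same construction assumption as above:

```python
from megatron.bridge.models.deepseek.deepseek_provider import DeepSeekV2LiteModelProvider

cfg = DeepSeekV2LiteModelProvider()
assert cfg.q_lora_rank is None   # no low-rank query projection in MLA
assert cfg.kv_lora_rank == 512   # KV compression rank from the base config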
- class bridge.models.deepseek.deepseek_provider.DeepSeekV3ModelProvider
Bases: bridge.models.deepseek.deepseek_provider.DeepSeekModelProvider
DeepSeek-V3 Model: https://github.com/deepseek-ai/DeepSeek-V3
- num_layers: int = 61
- hidden_size: int = 7168
- ffn_hidden_size: int = 18432
- num_moe_experts: int = 256
- moe_ffn_hidden_size: int = 2048
- moe_shared_expert_intermediate_size: int = 2048
- moe_layer_freq: Union[int, List[int]] = field(...)
- moe_router_topk: int = 8
- moe_router_num_groups: int = 8
- moe_router_group_topk: int = 4
- moe_router_topk_scaling_factor: float = 2.5
- make_vocab_size_divisible_by: int = 1280
- moe_router_score_function: str = 'sigmoid'
- moe_router_enable_expert_bias: bool = True
- moe_router_bias_update_rate: float = 0.001
- mscale: float = 1.0
- mscale_all_dim: float = 1.0
- vocab_size: int = 129280
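With 61 layers, DeepSeek-V3 does not split evenly across common pipeline sizes, which is what the base class's num_layers_in_first_pipeline_stage and num_layers_in_last_pipeline_stage fields accommodate. A hedged sketch (pipeline_model_parallel_size is assumed to be inherited from Megatron Core's model-parallel config; the split shown is an example, not a recommendation):

```python
from megatron.bridge.models.deepseek.deepseek_provider import DeepSeekV3ModelProvider

# Uneven pipeline split: first stage gets 6 layers, last gets 7, and the
# six middle stages get 8 each (6 + 6*8 + 7 == 61).
provider = DeepSeekV3ModelProvider(
    pipeline_model_parallel_size=8,
    num_layers_in_first_pipeline_stage=6,
    num_layers_in_last_pipeline_stage=7,
)
```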
- class bridge.models.deepseek.deepseek_provider.MoonlightModelProvider16B
Bases: bridge.models.deepseek.deepseek_provider.DeepSeekModelProvider
Moonlight-16B-A3B Model: https://github.com/moonshotai/Moonlight-16B-A3B
Moonlight is based on DeepSeek-V3.
- max_position_embeddings: int = 4096
- num_layers: int = 27
- hidden_size: int = 2048
- ffn_hidden_size: int = 11264
- num_attention_heads: int = 16
- kv_channels: int = 16
- num_moe_experts: int = 64
- moe_ffn_hidden_size: int = 1408
- moe_shared_expert_intermediate_size: int = 2816
- moe_layer_freq: Union[int, List[int]] = field(...)
- moe_router_topk: int = 6
- moe_router_num_groups: int = 1
- moe_router_group_topk: int = 1
- moe_router_topk_scaling_factor: float = 2.446
- moe_aux_loss_coeff: float = 0.001
- make_vocab_size_divisible_by: int = 1280
- moe_router_score_function: str = 'sigmoid'
- moe_router_enable_expert_bias: bool = True
- rotary_scaling_factor: float = 1.0
- mscale: float = 1.0
- mscale_all_dim: float = 1.0
- rotary_base: float = 50000
- layernorm_epsilon: float = 1e-05
- q_lora_rank: int = None
- init_method_std: float = 0.02
- moe_router_bias_update_rate: float = 0.001
- rotary_percent: float = 1.0
- vocab_size: int = 163840
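Moonlight pairs V3-style routing (sigmoid score function with expert bias) with a V2-Lite-sized trunk; the larger rotary base (50000) and vocabulary (163840) are its other notable departures from the base defaults. An illustrative check, under the same construction assumption as the earlier examples:

```python
from megatron.bridge.models.deepseek.deepseek_provider import MoonlightModelProvider16B

cfg = MoonlightModelProvider16B()
assert cfg.moe_router_score_function == "sigmoid"  # V3-style router
assert cfg.rotary_base == 50000 and cfg.vocab_size == 163840
```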
- bridge.models.deepseek.deepseek_provider._warn_deprecated(old_cls: str, new_cls: str) -> None
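The reference does not show this helper's body; a plausible minimal implementation, assuming it wraps Python's standard warnings module (hypothetical sketch, not the actual source):

```python
import warnings

def _warn_deprecated(old_cls: str, new_cls: str) -> None:
    # Hypothetical sketch: warn callers that old_cls should be replaced
    # by new_cls before the alias is removed.
    warnings.warn(
        f"{old_cls} is deprecated and will be removed in a future release; "
        f"use {new_cls} instead.",
        DeprecationWarning,
        stacklevel=3,
    )
```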
- class bridge.models.deepseek.deepseek_provider.DeepSeekProvider
Bases: bridge.models.deepseek.deepseek_provider.DeepSeekModelProvider
Deprecated alias for DeepSeekModelProvider.
Deprecated: This alias remains for backward compatibility and will be removed in a future release. Import and use DeepSeekModelProvider instead.
- __post_init__() -> None
- class bridge.models.deepseek.deepseek_provider.DeepSeekV2Provider
Bases: bridge.models.deepseek.deepseek_provider.DeepSeekV2ModelProvider
Deprecated alias for DeepSeekV2ModelProvider.
Deprecated: This alias remains for backward compatibility and will be removed in a future release. Import and use DeepSeekV2ModelProvider instead.
- __post_init__() -> None
- class bridge.models.deepseek.deepseek_provider.DeepSeekV2LiteProvider
Bases: bridge.models.deepseek.deepseek_provider.DeepSeekV2LiteModelProvider
Deprecated alias for DeepSeekV2LiteModelProvider.
Deprecated: This alias remains for backward compatibility and will be removed in a future release. Import and use DeepSeekV2LiteModelProvider instead.
- __post_init__() -> None
- class bridge.models.deepseek.deepseek_provider.DeepSeekV3Provider
Bases: bridge.models.deepseek.deepseek_provider.DeepSeekV3ModelProvider
Deprecated alias for DeepSeekV3ModelProvider.
Deprecated: This alias remains for backward compatibility and will be removed in a future release. Import and use DeepSeekV3ModelProvider instead.
- __post_init__() -> None
- class bridge.models.deepseek.deepseek_provider.MoonlightProvider
Bases: bridge.models.deepseek.deepseek_provider.MoonlightModelProvider16B
Deprecated alias for MoonlightModelProvider16B.
Deprecated: This alias remains for backward compatibility and will be removed in a future release. Import and use MoonlightModelProvider16B instead.
- __post_init__() -> None
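Migrating off the deprecated aliases is a rename only: each alias subclasses its replacement, so fields and defaults are identical apart from the warning emitted in __post_init__. For example:

```python
# Deprecated: constructing this emits a DeprecationWarning via __post_init__.
from megatron.bridge.models.deepseek.deepseek_provider import DeepSeekV3Provider

# Preferred: same fields, same defaults, no warning.
from megatron.bridge.models.deepseek.deepseek_provider import DeepSeekV3ModelProvider

provider = DeepSeekV3ModelProvider()
```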