# bridge.diffusion.recipes.flux.flux

## Module Contents

### Functions

| Function | Summary |
| --- | --- |
| `flux_12b_pretrain_config` | Return a pre-training configuration for the FLUX 12B model. |
| `flux_12b_sft_config` | Return an SFT (supervised fine-tuning) configuration for the FLUX 12B model. |

## API
- `bridge.diffusion.recipes.flux.flux.flux_12b_pretrain_config() → megatron.bridge.training.config.ConfigContainer`

  Return a pre-training configuration for the FLUX 12B model.
  Default parallelism: TP=2, PP=1. Uses mock/synthetic data when `data_paths` is None. To customize (e.g., data paths or checkpoint directory), edit this recipe or add a new recipe that builds on these defaults.
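A minimal usage sketch of the override pattern described above, assuming Megatron Bridge is installed and importable as `megatron.bridge`; the `checkpoint.save` field name is illustrative and not verified against the actual `ConfigContainer` layout:

```python
# Hypothetical usage sketch -- field names on the returned ConfigContainer
# are assumptions, not verified against the Megatron Bridge API.
from megatron.bridge.diffusion.recipes.flux.flux import flux_12b_pretrain_config

cfg = flux_12b_pretrain_config()

# Override defaults on the returned container instead of editing the recipe.
cfg.checkpoint.save = "/path/to/checkpoints"  # illustrative field name
```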
- `bridge.diffusion.recipes.flux.flux.flux_12b_sft_config(pretrained_checkpoint: str | None = None) → megatron.bridge.training.config.ConfigContainer`

  Return an SFT (supervised fine-tuning) configuration for the FLUX 12B model.
  Uses the same defaults as `flux_12b_pretrain_config()` and overrides the checkpoint configuration to load from `pretrained_checkpoint` when it is provided.
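The "SFT builds on pre-training defaults" pattern can be illustrated with a self-contained sketch; `RecipeConfig` and its fields are hypothetical stand-ins for the real `ConfigContainer`, shown only to demonstrate the override flow:

```python
# Self-contained illustration of the recipe pattern above.
# RecipeConfig is a hypothetical stand-in, NOT the real ConfigContainer.
from __future__ import annotations

from dataclasses import dataclass, replace


@dataclass
class RecipeConfig:
    tensor_parallel: int = 2        # mirrors the documented TP=2 default
    pipeline_parallel: int = 1      # mirrors the documented PP=1 default
    pretrained_checkpoint: str | None = None


def pretrain_config() -> RecipeConfig:
    """Return the pre-training defaults."""
    return RecipeConfig()


def sft_config(pretrained_checkpoint: str | None = None) -> RecipeConfig:
    """Reuse the pre-training defaults, overriding the checkpoint if given."""
    cfg = pretrain_config()
    if pretrained_checkpoint is not None:
        cfg = replace(cfg, pretrained_checkpoint=pretrained_checkpoint)
    return cfg
```

Building the SFT recipe on top of the pre-training recipe keeps the two configurations from drifting apart: any change to the shared defaults is picked up by both.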