nemo_automodel.components._diffusers.auto_diffusion_pipeline#
Module Contents#
Classes#
NeMoAutoDiffusionPipeline | Drop-in Diffusers pipeline that adds optional FSDP2/TP parallelization during from_pretrained. |
Functions#
_choose_device |
_iter_pipeline_modules |
_move_module_to_device |
Data#
logger |
API#
- nemo_automodel.components._diffusers.auto_diffusion_pipeline.logger#
  getLogger(…)
- nemo_automodel.components._diffusers.auto_diffusion_pipeline._choose_device(device: Optional[torch.device]) -> torch.device#
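The selection logic of this helper is not documented here. A minimal sketch, assuming it honors an explicit device and otherwise prefers CUDA when available (an assumption; the real implementation may differ):

```python
import torch

def choose_device(device=None):
    # Hypothetical sketch of _choose_device: honor an explicit device,
    # otherwise prefer CUDA when available, falling back to CPU.
    if device is not None:
        return torch.device(device)
    if torch.cuda.is_available():
        return torch.device("cuda")
    return torch.device("cpu")

resolved = choose_device(torch.device("cpu"))  # explicit device wins
```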
- nemo_automodel.components._diffusers.auto_diffusion_pipeline._iter_pipeline_modules(
    pipe: diffusers.DiffusionPipeline,
  )#
- nemo_automodel.components._diffusers.auto_diffusion_pipeline._move_module_to_device(
    module: torch.nn.Module,
    device: torch.device,
    torch_dtype: Any,
  )#
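A hedged sketch of what such a device/dtype move can look like, assuming that 'auto' (or None) means "move without changing the dtype" (an assumption, not documented behavior):

```python
import torch

def move_module_to_device(module, device, torch_dtype="auto"):
    # Hypothetical sketch: cast only when a concrete dtype is requested;
    # 'auto' or None moves the module without changing its dtype.
    if torch_dtype in (None, "auto"):
        return module.to(device=device)
    return module.to(device=device, dtype=torch_dtype)

layer = move_module_to_device(torch.nn.Linear(4, 4), torch.device("cpu"), torch.float16)
```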
- class nemo_automodel.components._diffusers.auto_diffusion_pipeline.NeMoAutoDiffusionPipeline#
Bases: diffusers.DiffusionPipeline
Drop-in Diffusers pipeline that adds optional FSDP2/TP parallelization during from_pretrained.
Features:
- Accepts a per-component mapping from component name to FSDP2Manager
- Moves all nn.Module components to the chosen device/dtype
- Parallelizes only the components present in the mapping, using their manager

parallel_scheme:
    Dict[str, FSDP2Manager]: component name -> manager used to parallelize that component
- classmethod from_pretrained(
    pretrained_model_name_or_path: str,
    *model_args,
    parallel_scheme: Optional[Dict[str, nemo_automodel.components.distributed.fsdp2.FSDP2Manager]] = None,
    device: Optional[torch.device] = None,
    torch_dtype: Any = 'auto',
    move_to_device: bool = True,
    **kwargs,
  )#
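A usage sketch, not runnable as-is: it assumes a distributed launch (e.g. via torchrun) and an already-configured FSDP2Manager; the model id and the "transformer" component name are placeholders, and the manager's constructor arguments are elided.

```python
import torch
from nemo_automodel.components.distributed.fsdp2 import FSDP2Manager
from nemo_automodel.components._diffusers.auto_diffusion_pipeline import NeMoAutoDiffusionPipeline

manager = FSDP2Manager(...)  # configured elsewhere; arguments elided

# Only components named in parallel_scheme are parallelized with their
# manager; every other nn.Module component is simply moved to the
# requested device/dtype.
pipe = NeMoAutoDiffusionPipeline.from_pretrained(
    "org/some-diffusion-model",  # placeholder model id
    parallel_scheme={"transformer": manager},
    device=torch.device("cuda"),
    torch_dtype=torch.bfloat16,
    move_to_device=True,
)
```

Components absent from parallel_scheme behave exactly as in a stock diffusers.DiffusionPipeline, which is what makes the class a drop-in replacement.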