Torch Distributed Checkpoint (TDC)

Torch Distributed Checkpoint (TDC) enables saving and loading models from multiple ranks in parallel. You can use it to save checkpoints across any number of ranks in parallel.

TDC also lets you change tensor_model_parallel_size and pipeline_model_parallel_size when loading the same checkpoint, even during a training session.
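
For example, a checkpoint written under one parallel layout can be resumed under another. The sketch below is illustrative only; the option names match the NeMo GPT model configuration, and the parallel sizes are arbitrary:

# Run 1: layout used when the checkpoint is saved
tensor_model_parallel_size: 4
pipeline_model_parallel_size: 2
torch_distributed_checkpoint: True

# Run 2: the same checkpoint resumed with a different layout
tensor_model_parallel_size: 2
pipeline_model_parallel_size: 4
torch_distributed_checkpoint: True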

NeMo Framework supports TDC for GPT-based models such as GPT-3 and Llama.

Model Training

Set the TDC parameter in the model configuration:

torch_distributed_checkpoint: True # Set to True to use PyTorch distributed checkpoint format.

Note that TDC is supported only with mcore_gpt=True.
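
Putting the two settings together, a minimal sketch of the relevant part of a training model configuration might look as follows (other model options are omitted; the parallel sizes are examples only):

mcore_gpt: True # TDC requires the Megatron Core GPT implementation
torch_distributed_checkpoint: True # Save and load checkpoints in the PyTorch distributed format
tensor_model_parallel_size: 2 # Example parallel layout; choose values that fit your cluster
pipeline_model_parallel_size: 1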

Model Fine-Tuning

Set the TDC parameter in the fine-tuning model configuration:

torch_distributed_checkpoint: True # Set to True to use PyTorch distributed checkpoint format.

Note that TDC is supported only with mcore_gpt=True.
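
As a sketch, the flag sits alongside the usual fine-tuning options. The restore_from_path key and the path shown below are illustrative placeholders for however your fine-tuning configuration points at the pretrained checkpoint; because the checkpoint is in the distributed format, the fine-tuning run may use a parallel layout different from the one used during pretraining:

mcore_gpt: True # TDC requires the Megatron Core GPT implementation
torch_distributed_checkpoint: True # Load the pretrained distributed checkpoint and save fine-tuned ones in the same format
restore_from_path: /path/to/pretrained_model # Illustrative placeholder for the pretrained checkpoint location
tensor_model_parallel_size: 2 # May differ from the layout used when the pretrained checkpoint was saved
pipeline_model_parallel_size: 1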
