# Migration guide to use lightning 2.0
- Set `trainer.strategy=auto`, as lightning 2.0 doesn't have a `None` strategy.
- Remove `resume_from_checkpoint` if being used as a trainer flag and pass the path to the `Trainer.fit(ckpt_path="...")` method instead.
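For example, a minimal before/after sketch (the checkpoint path here is a placeholder):

```python
import pytorch_lightning as pl

# PTL 1.x:
# trainer = pl.Trainer(resume_from_checkpoint="last.ckpt")
# trainer.fit(model)

# PTL 2.0: the checkpoint path moves to fit().
trainer = pl.Trainer()
# trainer.fit(model, ckpt_path="last.ckpt")  # model: your LightningModule
```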
trainer.strategy = "ddp_find_unused_parameters_true"if there are unused parameters in your model as lightning 2.0 has find_unused_parameters as False by default. Reference: NeMo PR 6433. More details about this change: lightning PR 16611.
- If using the Trainer's flag `replace_sampler_ddp`, replace it with `use_distributed_sampler`.
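A minimal sketch of the rename:

```python
import pytorch_lightning as pl

# PTL 1.x:
# trainer = pl.Trainer(replace_sampler_ddp=False)

# PTL 2.0:
trainer = pl.Trainer(use_distributed_sampler=False)
```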
- If using `CheckpointConnector`, replace it with `_CheckpointConnector`.
- To set or get the checkpoint path, use `trainer.ckpt_path` directly instead of calling the protected API via the checkpoint connector.
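For example ("last.ckpt" is a placeholder path):

```python
import pytorch_lightning as pl

trainer = pl.Trainer()
trainer.ckpt_path = "last.ckpt"  # set, instead of touching the protected connector
print(trainer.ckpt_path)         # get
```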
- If using `import load` from `pytorch_lightning.utilities.cloud_io`, change it to `from lightning_fabric.utilities.cloud_io import _load as load`.
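A minimal before/after sketch of the import:

```python
# PTL 1.x:
# from pytorch_lightning.utilities.cloud_io import load

# PTL 2.0: the helper moved to lightning_fabric and is protected.
from lightning_fabric.utilities.cloud_io import _load as load

# checkpoint = load("last.ckpt")  # "last.ckpt" is a placeholder path
```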
- If using `from pytorch_lightning.plugins.precision.native_amp import NativeMixedPrecisionPlugin`, replace it with `from pytorch_lightning.plugins.precision import MixedPrecisionPlugin`.
- Lightning 2.0 adds `'16-mixed'` and `'bf16-mixed'` as the precision values for fp16 mixed precision and bf16 mixed precision respectively. For backward compatibility, `16` or `'16'` and `'bf16'` also perform mixed precision and are equivalent to `'16-mixed'` and `'bf16-mixed'` respectively. However, lightning recommends using `'16-mixed'` and `'bf16-mixed'` to make it less ambiguous. Due to this, `MegatronHalfPrecisionPlugin`'s parent class from lightning, the `MixedPrecisionPlugin` class, expects the precision arg to be `'16-mixed'` or `'bf16-mixed'`. As a result, it's required to pass `'16-mixed'` or `'bf16-mixed'` to `MixedPrecisionPlugin` whenever the precision passed is any of `[16, '16', '16-mixed']` or `['bf16', 'bf16-mixed']` (see the sketch below). This can be taken care of as shown here: NeMo upgrade to lightning 2.0 PR and here: MixedPrecisionPlugin. Also, `'32-true'` is added as a precision value for pure fp32 along with `'32'` that existed. This can be taken into account as shown here in the NeMo upgrade to lightning 2.0 PR.
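A minimal sketch of this normalization; `normalize_precision` is a hypothetical helper for illustration, not a NeMo or lightning API:

```python
from pytorch_lightning.plugins.precision import MixedPrecisionPlugin

def normalize_precision(precision):
    # Hypothetical helper: map legacy precision values to the 2.0 names
    # MixedPrecisionPlugin expects.
    if precision in (16, "16", "16-mixed"):
        return "16-mixed"
    if precision in ("bf16", "bf16-mixed"):
        return "bf16-mixed"
    return precision

plugin = MixedPrecisionPlugin(precision=normalize_precision(16), device="cuda")
```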
- Lightning 2.0 renames the epoch-end hooks from `training_epoch_end`, `validation_epoch_end`, and `test_epoch_end` to `on_train_epoch_end`, `on_validation_epoch_end`, and `on_test_epoch_end`. The renamed hooks do not accept the `outputs` arg; instead, outputs need to be defined as an instance variable of the model class, to which the outputs of each step are manually appended. More detailed examples implementing this can be found under the migration guide of lightning's PR 16520. An example from NeMo can be found here.
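A minimal sketch of the new pattern (the attribute name `validation_step_outputs` is a convention from lightning's migration guide, not a required API):

```python
import torch
import pytorch_lightning as pl

class MyModel(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(4, 1)
        # PTL 2.0: collect step outputs yourself on the module.
        self.validation_step_outputs = []

    def validation_step(self, batch, batch_idx):
        loss = self.layer(batch).mean()
        self.validation_step_outputs.append(loss)
        return loss

    # Renamed from validation_epoch_end(self, outputs); no outputs arg.
    def on_validation_epoch_end(self):
        epoch_mean = torch.stack(self.validation_step_outputs).mean()
        self.log("val_loss_epoch", epoch_mean)
        self.validation_step_outputs.clear()  # free memory for the next epoch
```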
- Lightning 2.0 does not currently support multiple dataloaders for validation and testing in the case of `dataloader_iter`. Support for this will be added back in an upcoming release. If `dataloader_iter` is being used and your config passes multiple files to `test_ds.file_names`, please use just one file until this issue is fixed with pytorch lightning.
- With lightning 2.0, it's required to set `num_sanity_val_steps` to be a multiple of the number of microbatches while using `dataloader_iter` (applies only to Megatron files that use `dataloader_iter`) for all pretraining files (not downstream tasks like finetuning). This is taken care of internally in NeMo and does not require anything to be done by the user. However, if you are a developer of NeMo and are building a new model for pretraining that uses `dataloader_iter` instead of `batch` in its `validation_step` method, please make sure to call `self._reconfigure_val_batches()` in the `build_train_valid_test_datasets` method of your model.
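For NeMo developers, a sketch of where that call would go, assuming a NeMo installation and that `_reconfigure_val_batches` is the helper referenced above; `MyMegatronModel` is illustrative:

```python
from nemo.collections.nlp.models.language_modeling.megatron_base_model import MegatronBaseModel

class MyMegatronModel(MegatronBaseModel):  # illustrative pretraining model
    def build_train_valid_test_datasets(self):
        # Reconfigure validation batches in terms of microbatches before
        # building datasets; needed when validation_step consumes dataloader_iter.
        self._reconfigure_val_batches()
        # ... build and return the train/valid/test datasets ...
```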
- If the model is being wrapped with `LightningDistributedModule` in the `configure_ddp` method, please replace it with `_LightningModuleWrapperBase`, as done here: NeMo upgrade to lightning 2.0 PR.
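A sketch of the swap inside a custom DDP strategy, loosely following NeMo's approach; `MyDDPStrategy` is illustrative:

```python
from pytorch_lightning.overrides.base import _LightningModuleWrapperBase
from pytorch_lightning.strategies import DDPStrategy
from torch.nn.parallel import DistributedDataParallel

class MyDDPStrategy(DDPStrategy):  # illustrative
    def configure_ddp(self) -> None:
        # PTL 1.x wrapped the module in LightningDistributedModule;
        # with PTL 2.0 wrap it in _LightningModuleWrapperBase instead.
        self._model = DistributedDataParallel(
            _LightningModuleWrapperBase(self.model), **self._ddp_kwargs
        )
```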
- If using `pre_configure_ddp()` in your DDP, remove it as it's not required anymore. Reference: NeMo upgrade to lightning 2.0 PR.
- If any of the tests use CPU as the device, ensure to explicitly pass it in the trainer as `trainer = pl.Trainer(max_epochs=1, accelerator='cpu')`, since the default value in PTL >= 2.0 is `auto`, which picks CUDA when available.
- If using `from pytorch_lightning.loops import TrainingEpochLoop`, replace `TrainingEpochLoop` with `_TrainingEpochLoop`.
- If using `trainer.fit_loop.max_steps`, replace it with `trainer.fit_loop.epoch_loop.max_steps`.
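A minimal before/after sketch covering both loop changes:

```python
import pytorch_lightning as pl

# PTL 1.x:
# from pytorch_lightning.loops import TrainingEpochLoop
# max_steps = trainer.fit_loop.max_steps

# PTL 2.0: the loop class is protected and max_steps lives on the epoch loop.
from pytorch_lightning.loops import _TrainingEpochLoop

trainer = pl.Trainer()
max_steps = trainer.fit_loop.epoch_loop.max_steps  # -1 by default (no limit)
```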