Migrate Precision Configurations from NeMo 1.0 to NeMo 2.0#
In NeMo 2.0, precision configuration has been centralized in the MegatronMixedPrecision plugin.
NeMo 1.0 (Previous Release)#
In NeMo 1.0, various model and training precision settings (including FP8 configuration) are spread throughout the YAML configuration file.
trainer:
  precision: bf16
  ...

model:
  native_amp_init_scale: 4294967296
  native_amp_growth_interval: 1000
  ...
  fp8: False # enables fp8 in TransformerLayer forward
  fp8_e4m3: False # sets E4M3 FP8 format
  fp8_hybrid: False # sets hybrid FP8 format
  fp8_margin: 0
  fp8_amax_history_len: 1024
  fp8_amax_compute_algo: max
NeMo 2.0 (New Release)#
In NeMo 2.0, these settings are controlled using the MegatronMixedPrecision plugin.
from nemo import lightning as nl

plugin = nl.MegatronMixedPrecision(
    precision="bf16",
    fp16_initial_loss_scale=4294967296,
    fp16_loss_scale_window=1000,
    fp8=None,  # Can be either "e4m3" or "hybrid"
    fp8_margin=0,
    fp8_amax_history_len=1024,
    fp8_amax_compute_algo="max",
)

trainer = nl.Trainer(
    plugins=plugin,
    ...
)
Migrate Precision Configurations#
Locate and remove all precision and fp8 configurations in your NeMo 1.0 YAML config file.
Add the following import to your Python script:
from nemo import lightning as nl
Create a MegatronMixedPrecision plugin with the appropriate parameters:

plugin = nl.MegatronMixedPrecision(
    precision="bf16",
    fp16_initial_loss_scale=4294967296,
    fp16_loss_scale_window=1000,
    fp8=None,  # Can be either "e4m3" or "hybrid"
    fp8_margin=0,
    fp8_amax_history_len=1024,
    fp8_amax_compute_algo="max",
)
Adjust the arguments in the plugin to match your previous YAML configuration.
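For example, if your NeMo 1.0 config enabled hybrid FP8 (fp8: True together with fp8_hybrid: True), the settings translate roughly as follows. The key-to-argument mapping in the comments is inferred from the two examples in this guide, not an exhaustive reference:

# Sketch: NeMo 1.0 YAML keys -> MegatronMixedPrecision arguments (inferred mapping).
plugin = nl.MegatronMixedPrecision(
    precision="bf16",                    # trainer.precision
    fp16_initial_loss_scale=4294967296,  # model.native_amp_init_scale
    fp16_loss_scale_window=1000,         # model.native_amp_growth_interval
    fp8="hybrid",                        # model.fp8 + model.fp8_hybrid ("e4m3" for model.fp8_e4m3)
    fp8_margin=0,                        # model.fp8_margin
    fp8_amax_history_len=1024,           # model.fp8_amax_history_len
    fp8_amax_compute_algo="max",         # model.fp8_amax_compute_algo
)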
Add the precision plugin to your Trainer (see the Trainer migration guide):

trainer = nl.Trainer(
    ...
    plugins=plugin,
    ...
)
Note
TransformerEngine must be installed to use FP8 precision.
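If TransformerEngine may not be present in every environment you run in, a minimal sketch like the following (assuming the package imports as transformer_engine) falls back to plain bf16 when it is missing:

# Minimal sketch: enable FP8 only when TransformerEngine is importable.
# Assumes the import name "transformer_engine"; adjust for your install.
try:
    import transformer_engine  # noqa: F401
    fp8 = "hybrid"
except ImportError:
    fp8 = None  # FP8 requires TransformerEngine

plugin = nl.MegatronMixedPrecision(precision="bf16", fp8=fp8)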