Important

You are viewing the NeMo 2.0 documentation. This release introduces significant changes to the API and a new library, NeMo Run. We are currently porting all features from NeMo 1.0 to 2.0. For documentation on previous versions or features not yet available in 2.0, please refer to the NeMo 24.07 documentation.

Migrate Precision Configurations from NeMo 1.0 to NeMo 2.0#

In NeMo 2.0, precision configuration is centralized in the MegatronMixedPrecision plugin.

NeMo 1.0 (Previous Release)#

In NeMo 1.0, model and trainer precision settings (including the FP8 configuration) are spread across the YAML configuration file.

trainer:
  precision: bf16
  ...
model:
  native_amp_init_scale: 4294967296
  native_amp_growth_interval: 1000
  ...
  fp8: False # enables fp8 in TransformerLayer forward
  fp8_e4m3: False # sets E4M3 FP8 format
  fp8_hybrid: False # sets hybrid FP8 format
  fp8_margin: 0
  fp8_amax_history_len: 1024
  fp8_amax_compute_algo: max

NeMo 2.0 (New Release)#

In NeMo 2.0, these settings are controlled using the MegatronMixedPrecision plugin.

from nemo import lightning as nl

plugin = nl.MegatronMixedPrecision(
    precision="bf16",
    fp16_initial_loss_scale=4294967296,
    fp16_loss_scale_window=1000,
    fp8=None, # Can be either "e4m3" or "hybrid"
    fp8_margin=0,
    fp8_amax_history_len=1024,
    fp8_amax_compute_algo="max",
)

trainer = nl.Trainer(
    plugins=plugin,
    ...
)
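
For example, if your NeMo 1.0 config enabled FP8 with the hybrid format (fp8: True together with fp8_hybrid: True), the equivalent NeMo 2.0 plugin passes fp8="hybrid". A minimal sketch, reusing only the arguments shown above:

from nemo import lightning as nl

# NeMo 1.0 fp8: True + fp8_hybrid: True corresponds to fp8="hybrid";
# fp8: True + fp8_e4m3: True would correspond to fp8="e4m3".
plugin = nl.MegatronMixedPrecision(
    precision="bf16",
    fp8="hybrid",
    fp8_margin=0,
    fp8_amax_history_len=1024,
    fp8_amax_compute_algo="max",
)

Attach it to the Trainer via plugins=plugin exactly as in the example above.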

Migrate Precision Configurations#

  1. Locate and remove all precision and fp8 configurations in your NeMo 1.0 YAML config file.

  2. Add the following import to your Python script:

    from nemo import lightning as nl
    
  3. Create a MegatronMixedPrecision plugin with the appropriate parameters:

    plugin = nl.MegatronMixedPrecision(
        precision="bf16",
        fp16_initial_loss_scale=4294967296,
        fp16_loss_scale_window=1000,
        fp8=None, # Can be either "e4m3" or "hybrid"
        fp8_margin=0,
        fp8_amax_history_len=1024,
        fp8_amax_compute_algo="max",
    )
    
  4. Adjust the arguments in the plugin to match your previous YAML configuration; a key-by-key mapping sketch follows this list.

  5. Add the precision plugin to your Trainer (see Trainer migration guide):

    trainer = nl.Trainer(
        ...
        plugins=plugin,
        ...
    )
    
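As a reference for step 4, the mapping between the NeMo 1.0 YAML keys shown earlier and the plugin arguments is noted inline below. This is a sketch based on the matching values in the two examples above (for instance, native_amp_init_scale and fp16_initial_loss_scale share the value 4294967296):

from nemo import lightning as nl

plugin = nl.MegatronMixedPrecision(
    precision="bf16",                    # trainer.precision
    fp16_initial_loss_scale=4294967296,  # model.native_amp_init_scale
    fp16_loss_scale_window=1000,         # model.native_amp_growth_interval
    fp8=None,                            # model.fp8 with fp8_e4m3/fp8_hybrid -> None, "e4m3", or "hybrid"
    fp8_margin=0,                        # model.fp8_margin
    fp8_amax_history_len=1024,           # model.fp8_amax_history_len
    fp8_amax_compute_algo="max",         # model.fp8_amax_compute_algo
)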

Note

  • Transformer Engine must be installed to use FP8 precision.
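
A quick way to verify that Transformer Engine is importable before enabling FP8 (a minimal sketch; the PyPI package name transformer-engine is an assumption to confirm for your platform):

import importlib.util

# FP8 support depends on NVIDIA Transformer Engine; fail early with a clear message if it is missing.
if importlib.util.find_spec("transformer_engine") is None:
    raise ImportError(
        "Transformer Engine not found. Install it (e.g. pip install transformer-engine) to use FP8 precision."
    )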