class CosineReducer(rate=0.5)
Bases: object
Reduces the LR based on the initial LR, the current epoch, and the total number of epochs.
- Parameters
rate (float) – rate of reduction
reduce(init_learning_rate, cur_learning_rate, cur_step, cur_epoch, total_epochs)
Computes the LR based on the initial LR, the current epoch, and the total number of epochs.
- Parameters
init_learning_rate – initial LR value
cur_learning_rate – current LR value
cur_step – current training step
cur_epoch – current epoch
total_epochs – total number of epochs
Returns: new LR value
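The reference does not state the exact cosine formula. As a minimal sketch, assuming a common cosine-annealing variant in which the LR decays from the initial value down to init_learning_rate * (1 - rate) over the course of training, the computation might look like (the function name `cosine_reduce` is hypothetical):

```python
import math

def cosine_reduce(init_learning_rate, cur_epoch, total_epochs, rate=0.5):
    # Hypothetical cosine-annealing schedule (assumption, not the
    # documented implementation): the LR follows a half-cosine from
    # init_learning_rate down to init_learning_rate * (1 - rate).
    progress = cur_epoch / total_epochs
    cosine_term = 0.5 * (1.0 + math.cos(math.pi * progress))
    return init_learning_rate * ((1.0 - rate) + rate * cosine_term)
```

With the default rate=0.5, the LR starts at the initial value and ends at half of it after the final epoch.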
class OnPlateauReducer(plateau_count_trigger, reduction_rate)
Bases: object
Class for the reduce-on-plateau policy: reduces the LR after the validation metric has stopped improving (plateaued) a given number of times.
- Parameters
plateau_count_trigger – number of plateaus that must be reached before the LR is reduced
reduction_rate – multiplier applied at each LR reduction
reduce(cur_learning_rate, cur_val_metric)
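The plateau-tracking logic is not spelled out in this reference. A minimal sketch of one way such a policy could work, assuming a lower validation metric is better and that the plateau counter resets after each reduction (the class name `OnPlateauReducerSketch` is hypothetical):

```python
class OnPlateauReducerSketch:
    """Hypothetical re-implementation of a reduce-on-plateau policy."""

    def __init__(self, plateau_count_trigger, reduction_rate):
        self.plateau_count_trigger = plateau_count_trigger
        self.reduction_rate = reduction_rate
        self.best_metric = None   # best validation metric seen so far
        self.plateau_count = 0    # consecutive evaluations without improvement

    def reduce(self, cur_learning_rate, cur_val_metric):
        # Assumes lower is better (e.g. validation loss).
        if self.best_metric is None or cur_val_metric < self.best_metric:
            self.best_metric = cur_val_metric
            self.plateau_count = 0
        else:
            self.plateau_count += 1
        # Once enough plateaus accumulate, scale the LR and reset the counter.
        if self.plateau_count >= self.plateau_count_trigger:
            self.plateau_count = 0
            return cur_learning_rate * self.reduction_rate
        return cur_learning_rate
```

For example, with plateau_count_trigger=2 and reduction_rate=0.1, the LR drops by 10x after the metric fails to improve on two consecutive calls.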
class OnStepDecay(decay_ratio: float, decay_freq: int)
Bases: object
Class for decaying the learning rate based on the number of steps: the LR is multiplied by decay_ratio every decay_freq steps.
- Parameters
decay_ratio (float) – ratio for learning rate decay
decay_freq (int) – learning rate decay frequency (num steps)
reduce(cur_learning_rate, cur_step)
Computes the new learning rate.
The LR is adjusted to cur_learning_rate * decay_ratio every decay_freq steps.
- Parameters
cur_learning_rate – current LR value
cur_step – current training step
Returns: new LR value
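Following the documented behavior above, a minimal sketch of this step-decay rule might look like (the function name `step_decay` is hypothetical; it assumes the decay fires exactly on every decay_freq-th step):

```python
def step_decay(cur_learning_rate, cur_step, decay_ratio, decay_freq):
    # Multiply the LR by decay_ratio on every decay_freq-th step;
    # otherwise return it unchanged. Step 0 is left unchanged.
    if cur_step > 0 and cur_step % decay_freq == 0:
        return cur_learning_rate * decay_ratio
    return cur_learning_rate
```

For example, with decay_ratio=0.5 and decay_freq=10, the LR halves at steps 10, 20, 30, and so on.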
class PolyReducer(poly_power)
Bases: object
Class for reducing the learning rate based on epoch progress:
lr = lr_init * (1 - cur_epoch / total_epochs) ** poly_power
- Parameters
poly_power – power of the polynomial decay
reduce(init_learning_rate, cur_learning_rate, cur_step, cur_epoch, total_epochs)
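The polynomial formula is stated in the class docstring above; a minimal sketch of it as a standalone function (the name `poly_reduce` is hypothetical) is:

```python
def poly_reduce(init_learning_rate, cur_epoch, total_epochs, poly_power):
    # Polynomial decay as given in the docstring:
    #   lr = lr_init * (1 - cur_epoch / total_epochs) ** poly_power
    return init_learning_rate * (1.0 - cur_epoch / total_epochs) ** poly_power
```

With poly_power=1 this is a linear ramp from init_learning_rate down to 0; larger powers front-load more of the decay toward the end of training.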