emerging_optimizers.scalar_optimizers
- emerging_optimizers.scalar_optimizers.calculate_adam_update(grad, exp_avg, exp_avg_sq, betas, correct_bias, use_nesterov, step, eps)
Performs the Adam update.
This function computes one step of Adam.
The update rule is as follows:
\[
m_t = \beta_1 m_{t-1} + (1 - \beta_1) g_t \\
v_t = \beta_2 v_{t-1} + (1 - \beta_2) g_t^2 \\
\hat{m}_t = \frac{m_t}{1 - \beta_1^t} \\
\hat{v}_t = \frac{v_t}{1 - \beta_2^t} \\
\text{update} = \frac{\hat{m}_t}{\sqrt{\hat{v}_t} + \epsilon}
\]
- Parameters:
grad (Tensor) – The gradient tensor.
exp_avg (Tensor) – The accumulated first moment of the gradient.
exp_avg_sq (Tensor) – The accumulated second moment of the gradient.
betas (Tuple[float, float]) – The EMA beta coefficients for the Adam update.
correct_bias (bool) – Whether to correct the bias of the Adam update.
use_nesterov (bool) – Whether to use Nesterov momentum.
step (int) – The current step of the optimizer, used to compute the bias correction terms.
eps (float) – The epsilon for the Adam second moment update.
- Returns:
The Adam update.
- Return type:
Tensor
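To make the formula concrete, here is a minimal PyTorch sketch of the update rule above, with bias correction enabled and the Nesterov option omitted. It is an illustrative reimplementation of the displayed math, not the library's calculate_adam_update; the default hyperparameter values and the learning rate in the usage line are placeholders.

```python
import torch

def adam_update_reference(grad, exp_avg, exp_avg_sq, betas=(0.9, 0.999),
                          step=1, eps=1e-8):
    """Sketch of the Adam update rule shown above (bias correction, no Nesterov)."""
    beta1, beta2 = betas
    # m_t = beta1 * m_{t-1} + (1 - beta1) * g_t
    exp_avg.mul_(beta1).add_(grad, alpha=1 - beta1)
    # v_t = beta2 * v_{t-1} + (1 - beta2) * g_t^2
    exp_avg_sq.mul_(beta2).addcmul_(grad, grad, value=1 - beta2)
    # Bias-corrected moments and the final update direction
    m_hat = exp_avg / (1 - beta1**step)
    v_hat = exp_avg_sq / (1 - beta2**step)
    return m_hat / (v_hat.sqrt() + eps)

# Usage: the optimizer state lives in the caller, and the caller applies its own learning rate.
param = torch.randn(4, 4)
exp_avg = torch.zeros_like(param)
exp_avg_sq = torch.zeros_like(param)
update = adam_update_reference(torch.randn_like(param), exp_avg, exp_avg_sq, step=1)
param -= 1e-3 * update
```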
- emerging_optimizers.scalar_optimizers.calculate_ademamix_update(grad, exp_avg_fast, exp_avg_slow, exp_avg_sq, num_beta_slow_warmup_steps, num_alpha_warmup_steps, betas, step, eps, correct_bias, alpha=2)
Performs the AdEMAMix update.
This function computes one step of AdEMAMix. Based on apple/ml-ademamix and https://arxiv.org/abs/2409.03137.
The update rule is as follows:
\[
m_t^{\text{fast}} = \beta_{\text{fast}} m_{t-1}^{\text{fast}} + (1 - \beta_{\text{fast}}) g_t \\
m_t^{\text{slow}} = \beta_{\text{slow}} m_{t-1}^{\text{slow}} + (1 - \beta_{\text{slow}}) g_t \\
v_t = \beta_2 v_{t-1} + (1 - \beta_2) g_t^2 \\
\hat{m}_t^{\text{fast}} = \frac{m_t^{\text{fast}}}{1 - \beta_{\text{fast}}^t} \\
\hat{v}_t = \frac{v_t}{1 - \beta_2^t} \\
\text{update} = \frac{\hat{m}_t^{\text{fast}} + \alpha m_t^{\text{slow}}}{\sqrt{\hat{v}_t} + \epsilon}
\]
- Parameters:
grad (Tensor) – The gradient tensor.
exp_avg_fast (Tensor) – The accumulated first moment of the gradient with fast time constant.
exp_avg_slow (Tensor) – The accumulated first moment of the gradient with slow time constant.
exp_avg_sq (Tensor) – The accumulated second moment of the gradient.
num_beta_slow_warmup_steps (int | None) – Number of warmup steps used to increase beta_slow.
num_alpha_warmup_steps (int | None) – Number of warmup steps used to increase alpha.
betas (Tuple[float, float, float]) – The EMA beta coefficients for the AdEMAMix update.
step (int) – The current step of the optimizer, used to compute the bias correction terms.
eps (float) – The epsilon for the Adam second moment update.
correct_bias (bool) – Whether to correct the bias of the AdEMAMix update.
alpha (float) – Coefficient weighting the slow EMA term in the update; this is the final value used when warmup scheduling is enabled.
- Returns:
The AdEMAMix update.
- Return type:
Tensor
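As a reading aid, here is a minimal PyTorch sketch of the rule above, with the alpha and beta_slow warmup schedules omitted and bias correction on. The tuple ordering of betas and the default values are assumptions for illustration; this is not the library's calculate_ademamix_update.

```python
import torch

def ademamix_update_reference(grad, exp_avg_fast, exp_avg_slow, exp_avg_sq,
                              betas=(0.9, 0.999, 0.9999),  # (beta_fast, beta2, beta_slow): ordering assumed
                              step=1, eps=1e-8, alpha=2.0):
    beta_fast, beta2, beta_slow = betas
    # Two first-moment EMAs with different time constants
    exp_avg_fast.mul_(beta_fast).add_(grad, alpha=1 - beta_fast)
    exp_avg_slow.mul_(beta_slow).add_(grad, alpha=1 - beta_slow)
    # Second moment, as in Adam
    exp_avg_sq.mul_(beta2).addcmul_(grad, grad, value=1 - beta2)
    # Per the formula above, only the fast EMA and the second moment are bias-corrected
    m_fast_hat = exp_avg_fast / (1 - beta_fast**step)
    v_hat = exp_avg_sq / (1 - beta2**step)
    return (m_fast_hat + alpha * exp_avg_slow) / (v_hat.sqrt() + eps)
```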
- emerging_optimizers.scalar_optimizers.calculate_laprop_update(grad, exp_avg, exp_avg_sq, correct_bias, betas, step, eps)
Performs the LAProp (normalized SGD with momentum) update.
LAProp can be seen as RMSProp with a momentum term, or as normalized SGD with momentum. Based on Z-T-WANG/LaProp-Optimizer and https://arxiv.org/abs/2002.04839.
The update rule is as follows:
\[
v_t = \beta_2 v_{t-1} + (1 - \beta_2) g_t^2 \\
\hat{v}_t = \frac{v_t}{1 - \beta_2^t} \\
g'_t = \frac{g_t}{\sqrt{\hat{v}_t} + \epsilon} \\
m_t = \beta_1 m_{t-1} + (1 - \beta_1) g'_t \\
\hat{m}_t = \frac{m_t}{1 - \beta_1^t} \\
\text{update} = \hat{m}_t
\]
- Parameters:
grad (Tensor) – The gradient tensor.
exp_avg (Tensor) – The exponential moving average of the gradient.
exp_avg_sq (Tensor) – The exponential moving average of the gradient squared.
correct_bias (bool) – Whether to correct the bias of the LAProp update.
betas (Tuple[float, float]) – The betas for the exponential moving average.
step (int) – The current step.
eps (float) – The epsilon for the second moment update.
- Returns:
The LAProp update.
- Return type:
Tensor
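The ordering of the steps is what distinguishes LAProp from Adam: the gradient is normalized by the second moment before the momentum EMA is taken. Below is a minimal PyTorch sketch of the rule above with bias correction on; it is an illustration with placeholder defaults, not the library's calculate_laprop_update.

```python
import torch

def laprop_update_reference(grad, exp_avg, exp_avg_sq, betas=(0.9, 0.999),
                            step=1, eps=1e-8):
    beta1, beta2 = betas
    # Second moment and gradient normalization come first...
    exp_avg_sq.mul_(beta2).addcmul_(grad, grad, value=1 - beta2)
    v_hat = exp_avg_sq / (1 - beta2**step)
    grad_normalized = grad / (v_hat.sqrt() + eps)
    # ...then momentum is accumulated on the normalized gradient
    exp_avg.mul_(beta1).add_(grad_normalized, alpha=1 - beta1)
    return exp_avg / (1 - beta1**step)
```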
- emerging_optimizers.scalar_optimizers.calculate_lion_update(grad, exp_avg, momentum_beta, momentum_beta2=None)
Performs the Lion update.
This function computes one step of the Lion update.
The update rule is as follows:
\[
\text{update} = \text{sign}(\beta_1 m_{t-1} + (1 - \beta_1) g_t) \\
m_t = \beta_2 m_{t-1} + (1 - \beta_2) g_t
\]
- Parameters:
grad (Tensor) – The gradient tensor.
exp_avg (Tensor) – The accumulated first moment (momentum) of the gradient.
momentum_beta (float) – The beta coefficient (\(\beta_1\) above) used to interpolate the momentum and the gradient for the sign update.
momentum_beta2 (float | None) – The beta coefficient (\(\beta_2\) above) used to update the momentum buffer.
- Returns:
The Lion update.
- Return type:
Tensor
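A minimal PyTorch sketch of the Lion rule above: note that the sign is taken of an interpolation using \(\beta_1\) before the momentum buffer itself is updated with \(\beta_2\). The defaults are placeholders, and this is not the library's calculate_lion_update.

```python
import torch

def lion_update_reference(grad, exp_avg, momentum_beta=0.9, momentum_beta2=0.99):
    # update = sign(beta1 * m_{t-1} + (1 - beta1) * g_t)
    update = (momentum_beta * exp_avg + (1 - momentum_beta) * grad).sign()
    # m_t = beta2 * m_{t-1} + (1 - beta2) * g_t
    exp_avg.mul_(momentum_beta2).add_(grad, alpha=1 - momentum_beta2)
    return update
```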
- emerging_optimizers.scalar_optimizers.calculate_signum_update(grad, exp_avg, momentum_beta, correct_bias, use_nesterov, step, use_shape_scaling=False)
Performs the sign-SGD or Signum update.
This function computes one step of sign-SGD or Signum. Based on https://arxiv.org/abs/1802.04434. When using signSGD with shape scaling, the general recommendation is to scale \(lr = \text{adam lr} \cdot \text{network width} \cdot \frac{2}{\text{rows} + \text{cols}}\). This enables learning rate transfer with width scaling (https://arxiv.org/abs/2506.07254v1).
The update rule is as follows:
\[
m_t = \beta m_{t-1} + (1 - \beta) g_t \\
\hat{m}_t = \frac{m_t}{1 - \beta^t} \\
\text{update} = \text{sign}(\hat{m}_t)
\]
- Parameters:
grad (Tensor) – The gradient tensor.
exp_avg (Tensor) – The accumulated first moment of the gradient.
momentum_beta (float) – The EMA beta coefficient for the momentum update.
correct_bias (bool) – Whether to correct the bias of the momentum update.
use_nesterov (bool) – Whether to use Nesterov momentum.
step (int) – The current step of the optimizer, used to compute the bias correction terms.
use_shape_scaling (bool) – Whether to scale the update by the shape of the tensor.
- Returns:
The sign-SGD/Signum update.
- Return type:
Tensor
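A minimal PyTorch sketch of the Signum rule above, with bias correction on and the Nesterov and shape-scaling options omitted; an illustration with placeholder defaults, not the library's calculate_signum_update.

```python
import torch

def signum_update_reference(grad, exp_avg, momentum_beta=0.9, step=1):
    # m_t = beta * m_{t-1} + (1 - beta) * g_t
    exp_avg.mul_(momentum_beta).add_(grad, alpha=1 - momentum_beta)
    # Dividing by the positive scalar (1 - beta^t) does not change the sign,
    # but it is kept here to mirror the formula above
    m_hat = exp_avg / (1 - momentum_beta**step)
    return m_hat.sign()
```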
- emerging_optimizers.scalar_optimizers.calculate_sim_ademamix_update(grad, exp_avg, exp_avg_sq, num_beta_fast_warmup_steps, min_beta_fast, betas, step, eps, correct_bias, alpha=2)
Performs the simplified AdEMAMix update.
This function computes one step of simplified AdEMAMix. Based on DepenM/Simplified-AdEMAMix and https://arxiv.org/abs/2409.03137.
The update rule is as follows:
\[
m_t = \beta_{\text{fast}} m_{t-1} + g_t \\
v_t = \beta_2 v_{t-1} + (1 - \beta_2) g_t^2 \\
\hat{m}_t = \frac{m_t}{(1 - \beta_{\text{fast}}^t) / (1 - \beta_{\text{fast}})} \\
\hat{v}_t = \frac{v_t}{1 - \beta_2^t} \\
\text{update} = \frac{\alpha g_t + \hat{m}_t}{\sqrt{\hat{v}_t} + \epsilon}
\]
- Parameters:
grad (Tensor) – The gradient tensor.
exp_avg (Tensor) – The accumulated first moment of the gradient.
exp_avg_sq (Tensor) – The accumulated second moment of the gradient.
num_beta_fast_warmup_steps (int | None) – Number of warmup steps used to increase beta_fast.
min_beta_fast (float) – The minimum beta_fast value used at initialization.
betas (Tuple[float, float]) – The EMA beta coefficients for the simplified-AdEMAMix update.
step (int) – The current step of the optimizer, used to compute the bias correction terms.
eps (float) – The epsilon for the Adam second moment update.
correct_bias (bool) – Whether to correct the bias of the AdEMAMix update.
alpha (float) – Coefficient for mixing the current gradient and the EMA.
- Returns:
The simplified-AdEMAMix update.
- Return type:
Tensor
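A minimal PyTorch sketch of the simplified-AdEMAMix rule above, with the beta_fast warmup omitted and bias correction on. The tuple ordering of betas and the default values are assumptions for illustration; this is not the library's calculate_sim_ademamix_update.

```python
import torch

def sim_ademamix_update_reference(grad, exp_avg, exp_avg_sq,
                                  betas=(0.99, 0.999),  # (beta_fast, beta2): ordering assumed
                                  step=1, eps=1e-8, alpha=2.0):
    beta_fast, beta2 = betas
    # The momentum accumulates the raw gradient (no (1 - beta_fast) factor)
    exp_avg.mul_(beta_fast).add_(grad)
    exp_avg_sq.mul_(beta2).addcmul_(grad, grad, value=1 - beta2)
    # Bias correction matches the accumulator form in the formula above
    m_hat = exp_avg / ((1 - beta_fast**step) / (1 - beta_fast))
    v_hat = exp_avg_sq / (1 - beta2**step)
    return (alpha * grad + m_hat) / (v_hat.sqrt() + eps)
```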