emerging_optimizers.soap
SOAP
- class emerging_optimizers.soap.soap.SOAP(
- params,
- lr,
- betas=(0.9, 0.95),
- shampoo_beta=0.95,
- eps=1e-08,
- weight_decay=0.01,
- *,
- weight_decay_method='decoupled',
- use_nesterov=False,
- precondition_frequency=1,
- adam_warmup_steps=0,
- precondition_1d=False,
- correct_bias=True,
- fp32_matmul_prec='high',
- use_eigh=False,
- qr_fp32_matmul_prec='high',
- use_adaptive_criteria=False,
- adaptive_update_tolerance=1e-07,
- power_iter_steps=1,
- max_update_rms=0.0,
- use_kl_shampoo=False,
- correct_shampoo_beta_bias=None,
- )
Implements a variant of the SOAP (ShampoO with Adam in the Preconditioner eigenbasis) algorithm.
SOAP (https://arxiv.org/abs/2409.11321) is a preconditioned optimizer that combines the benefits of Shampoo's non-diagonal preconditioning with Adam's adaptive learning rates. It preconditions in the eigenbasis of the gradient correlation matrices to adapt to the local geometry of the optimization landscape. A minimal usage sketch follows the parameter list below.
- Parameters:
params (Iterable[Tensor] | Iterable[dict[str, Any]] | Iterable[tuple[str, Tensor]]) – Iterable of parameters to optimize or dicts defining parameter groups
lr (float) – The learning rate to use
betas (Tuple[float, float]) – Inner Adam's betas parameters (b1, b2)
shampoo_beta (float) – Beta for the moving average of the Kronecker factor matrices (L and R in the paper); used instead of betas[1] if >= 0
eps (float) – Inner Adam's epsilon for numerical stability
weight_decay (float) – Weight decay coefficient
weight_decay_method (Literal['decoupled', 'independent', 'l2']) – Method to apply weight decay; see WeightDecayMixin for more details.
use_nesterov (bool) – Whether to use Nesterov momentum in the inner Adam (https://cs229.stanford.edu/proj2015/054_report.pdf, https://openreview.net/forum?id=OM0jvwB8jIp57ZJjtNEZ)
precondition_frequency (int | Callable[[int], int]) – How often to update the preconditioner. Can be an integer for a fixed frequency or a callable that takes the current step as input and returns the frequency.
adam_warmup_steps (int) – How many steps to skip preconditioning at the beginning (i.e., use standard AdamW updates)
precondition_1d (bool) – Whether to precondition 1D gradients (like biases).
correct_bias (bool) – Whether to use bias correction in the inner Adam and the Kronecker factor matrix EMAs
fp32_matmul_prec (str) – Precision of the matmul (GEMM) operations on the optimizer states
use_eigh (bool) – Whether to use full symmetric eigendecomposition (eigh) to compute the eigenbasis. If False, use orthogonal iteration to compute the eigenbasis.
qr_fp32_matmul_prec (str) – Precision of the matmul operations in the QR decomposition.
use_adaptive_criteria (bool) – Whether to use an adaptive criterion to determine if an eigenbasis update is needed
adaptive_update_tolerance (float) – Tolerance threshold for the update criterion. Only used if use_adaptive_criteria is True.
power_iter_steps (int) – Number of power iteration steps to perform before QR decomposition. More steps can improve convergence at the cost of computation time.
max_update_rms (float) – Clip the update RMS to this value (0 means no clipping).
use_kl_shampoo (bool) – Whether to use the KL-Shampoo correction.
correct_shampoo_beta_bias (bool | None) – Whether to bias-correct shampoo_beta. Decoupled from correct_bias for testability, because the reference SOAP implementation does not bias-correct shampoo_beta.
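Example
A minimal usage sketch, assuming SOAP follows the standard torch.optim.Optimizer interface (the model, data, and loss here are placeholders):
import torch
from emerging_optimizers.soap.soap import SOAP

model = torch.nn.Linear(128, 64)  # placeholder model
optimizer = SOAP(
    model.parameters(),
    lr=3e-4,
    betas=(0.9, 0.95),
    shampoo_beta=0.95,
    weight_decay=0.01,
    precondition_frequency=10,  # refresh the eigenbases every 10 steps
)

for _ in range(100):
    optimizer.zero_grad()
    loss = model(torch.randn(32, 128)).square().mean()  # dummy loss
    loss.backward()
    optimizer.step()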
- emerging_optimizers.soap.soap.precondition(grad, eigenbasis_list=None, dims=None)
Projects the gradient to and from the eigenbases of the Kronecker factor matrices.
This function performs tensor contractions between the input gradient and the Kronecker factor eigenbases.
- Parameters:
grad (Tensor) – Input tensor to be preconditioned
eigenbasis_list (List[Tensor] | None) – List of eigenbases for preconditioning. Each matrix should be a square matrix of eigenvectors.
dims (List[List[int]] | None) – Dimensions for tensor contraction. Default is [[0], [0]], which contracts the first dimension of grad with the first dimension of each eigenbasis matrix, projecting into the eigenbasis. Use [[0], [1]] to project back to the original space.
- Return type:
Tensor
Example
>>> grad = torch.randn(10, 20)
>>> Q = torch.randn(10, 10)
>>> precondition(grad, [Q], dims=[[0], [0]])
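As a sanity check, projecting into the eigenbasis (dims=[[0], [0]]) and back (dims=[[0], [1]]) should recover the original tensor when the eigenbases are orthonormal. A sketch, assuming precondition contracts the gradient with each eigenbasis in turn (the QR-derived Q matrices are illustrative):
import torch
from emerging_optimizers.soap.soap import precondition

QL = torch.linalg.qr(torch.randn(10, 10))[0]  # orthonormal placeholder eigenbasis
QR = torch.linalg.qr(torch.randn(20, 20))[0]  # orthonormal placeholder eigenbasis
grad = torch.randn(10, 20)

projected = precondition(grad, [QL, QR], dims=[[0], [0]])   # into the eigenbasis
restored = precondition(projected, [QL, QR], dims=[[0], [1]])  # back to the original space
assert torch.allclose(restored, grad, atol=1e-5)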
- emerging_optimizers.soap.soap.init_kronecker_factors(grad, precondition_1d=False)
Initializes the Kronecker factor matrices for the SOAP optimizer.
This function creates the initial Kronecker factor matrices (L and R) used for preconditioning. For 1D tensors (like biases), it can either skip preconditioning or create a single square Kronecker factor matrix. For higher-dimensional tensors, it creates a square Kronecker factor matrix for each dimension.
- When precondition_1d is:
- False (default):
1D tensors (like biases) will skip SOAP preconditioning entirely
These parameters will use standard Adam-style updates
This is often desirable as biases typically have fewer parameters and simpler optimization landscapes
Can improve performance and reduce memory usage
- True:
All parameters, including 1D tensors, will use SOAP preconditioning
May be beneficial for certain architectures or training scenarios
- Parameters:
grad (Tensor) – Gradient tensor used to initialize the Kronecker factor matrices. The shape of this tensor determines the size of the Kronecker factor matrices.
precondition_1d (bool) – Whether to create Kronecker factor matrices for 1D tensors (like biases). If False, 1D tensors will skip preconditioning.
- Returns:
- List of Kronecker factor matrices (L and R in the paper).
For 1D tensors with precondition_1d=False: List containing an empty tensor
For 1D tensors with precondition_1d=True: List containing a square matrix
For higher-dimensional tensors: List of square matrices, one per dimension
- Return type:
List[torch.Tensor]
Example
>>> # For a 1D tensor (bias)
>>> grad_1d = torch.randn(10)
>>> precond_1d = init_kronecker_factors(grad_1d, precondition_1d=True)
>>> print(len(precond_1d))  # 1
>>> print(precond_1d[0].shape)  # (10, 10)
>>> # For a 2D tensor (weight matrix)
>>> grad_2d = torch.randn(10, 20)
>>> precond_2d = init_kronecker_factors(grad_2d)
>>> print(len(precond_2d))  # 2
>>> print(precond_2d[0].shape)  # (10, 10)
>>> print(precond_2d[1].shape)  # (20, 20)
- emerging_optimizers.soap.soap.update_kronecker_factors(
- kronecker_factor_list,
- grad,
- shampoo_beta,
- precondition_1d=False,
- )
Updates the preconditioner matrices using gradient outer products.
This function updates the Kronecker factor matrices (L and R) used for preconditioning by computing and accumulating gradient outer products. For 1D tensors (like biases), it can optionally skip preconditioning or use a special 1D preconditioning strategy. It modifies the kronecker_factor_list in place.
- Parameters:
kronecker_factor_list (List[Tensor]) – List of preconditioner matrices (L and R) to update. Each matrix should be square and match the corresponding dimension of grad.
grad (Tensor) – Gradient tensor of the parameter being optimized
shampoo_beta (float) – Momentum coefficient for updating preconditioners. Controls how much weight to give to new vs. old gradient statistics.
precondition_1d (bool) – Whether to apply preconditioning to 1D tensors (like biases). If False, 1D tensors will skip preconditioning.
- Return type:
None
Example
>>> grad = torch.randn(10, 20)
>>> L = torch.zeros(10, 10)
>>> R = torch.zeros(20, 20)
>>> update_kronecker_factors([L, R], grad, shampoo_beta=0.95)
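The accumulation above is an exponential moving average of gradient outer products. A minimal sketch of the assumed update rule (illustrative, not the library internals):
import torch

shampoo_beta = 0.95
G = torch.randn(10, 20)
L = torch.zeros(10, 10)
R = torch.zeros(20, 20)

# Assumed EMA form: L <- beta * L + (1 - beta) * G @ G.T, and likewise for R.
L.lerp_(G @ G.T, 1 - shampoo_beta)
R.lerp_(G.T @ G, 1 - shampoo_beta)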
- emerging_optimizers.soap.soap.update_kronecker_factors_kl_shampoo(
- kronecker_factor_list,
- grad,
- shampoo_beta,
- eigenbasis_list,
- eps,
- eigval_exp=-1.0,
- )
Updates the Kronecker factor matrices in place using the KL-Shampoo correction.
Implements Kullback–Leibler minimization from https://arxiv.org/pdf/2509.03378.
- Parameters:
kronecker_factor_list (List[Tensor]) – List of preconditioner matrices (L and R) to update.
grad (Tensor) – Gradient tensor of the parameter being optimized
shampoo_beta (float) – Momentum coefficient for updating preconditioners.
eigenbasis_list (List[Tensor]) – List of orthonormal eigenbases of the Kronecker factor matrices
eps (float) – Small offset for numerical stability.
eigval_exp (float) – Exponent of the eigenvalues.
- Return type:
None
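Example
A usage sketch following the signature above; the identity factors and QR-derived eigenbases are illustrative placeholders:
import torch
from emerging_optimizers.soap.soap import update_kronecker_factors_kl_shampoo

grad = torch.randn(10, 20)
L, R = torch.eye(10), torch.eye(20)
QL = torch.linalg.qr(torch.randn(10, 10))[0]  # placeholder orthonormal eigenbasis
QR = torch.linalg.qr(torch.randn(20, 20))[0]  # placeholder orthonormal eigenbasis

# Updates [L, R] in place with the KL-Shampoo correction.
update_kronecker_factors_kl_shampoo(
    [L, R], grad, shampoo_beta=0.95, eigenbasis_list=[QL, QR], eps=1e-8
)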
- emerging_optimizers.soap.soap.update_eigenbasis_and_momentum(
- kronecker_factor_list,
- eigenbasis_list,
- exp_avg_sq,
- momentum,
- use_eigh=False,
- use_adaptive_criteria=False,
- adaptive_update_tolerance=None,
- power_iter_steps=1,
- convert_to_float=True,
- )
Updates the eigenbases using orthogonal iteration (power iteration followed by QR decomposition) or eigh.
This function performs an update of the eigenbases (QL and QR) used for preconditioning. It follows these steps (the basis rotations are sketched after this list):
Projects momentum from the old eigenbasis back to the original basis
Updates the eigenbases using power iteration followed by QR decomposition (orthogonal iteration), or eigh
Projects momentum into the new eigenbasis
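The rotations in steps 1 and 3 can be expressed with precondition from above. A sketch with illustrative eigenbases, where Q_old and Q_new stand in for the bases before and after the step-2 update:
import torch
from emerging_optimizers.soap.soap import precondition

momentum = torch.randn(10, 20)
Q_old = [torch.linalg.qr(torch.randn(n, n))[0] for n in (10, 20)]  # bases before the update
Q_new = [torch.linalg.qr(torch.randn(n, n))[0] for n in (10, 20)]  # bases after the update

m_orig = precondition(momentum, Q_old, dims=[[0], [1]])  # step 1: back to the original basis
momentum = precondition(m_orig, Q_new, dims=[[0], [0]])  # step 3: into the new eigenbasis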
- Parameters:
kronecker_factor_list (List[Tensor]) – List of preconditioner matrices (L and R) that define the optimization landscape. These are updated with gradient statistics.
eigenbasis_list (List[Tensor]) – List of current eigenbases (QL and QR) used for preconditioning. These will be updated by this function.
exp_avg_sq (Tensor) – Inner Adam's second moment tensor, used for scaling the preconditioner updates. This tensor is modified in-place.
momentum (Tensor) – Inner Adam's first moment tensor, used for tracking gradient momentum. This tensor is modified in-place.
use_eigh (bool) – Whether to use full symmetric eigendecomposition (eigh) to compute the eigenbasis. If False, use orthogonal iteration to compute the eigenbasis.
use_adaptive_criteria (bool) – Whether to use an adaptive criterion to determine if an eigenbasis update is needed
adaptive_update_tolerance (float | None) – Tolerance threshold for the update criterion. Only used if use_adaptive_criteria is True.
power_iter_steps (int) – Number of power iteration steps to perform before QR decomposition. More steps can improve convergence at the cost of computation time.
convert_to_float (bool) – Whether to convert the preconditioner matrices and their corresponding orthonormal matrices to float for amortized computation. Otherwise, they are left in their original dtype.
- Returns:
- A tuple containing:
List[torch.Tensor]: Updated list of eigenbases (QL and QR)
torch.Tensor: Updated momentum tensor projected to the new eigenbasis
- Return type:
Tuple[List[torch.Tensor], torch.Tensor]
Example
>>> L = torch.randn(10, 10)
>>> R = torch.randn(20, 20)
>>> QL = torch.randn(10, 10)
>>> QR = torch.randn(20, 20)
>>> exp_avg_sq = torch.randn(10, 20)
>>> momentum = torch.randn(10, 20)
>>> eigenbases, momentum = update_eigenbasis_and_momentum(
...     [L, R], [QL, QR], exp_avg_sq, momentum)
emerging_optimizers.soap.soap_utils
- emerging_optimizers.soap.soap_utils.get_eigenbasis_eigh(
- kronecker_factor_list,
- convert_to_float=True,
- eigenbasis_list=None,
- use_adaptive_criteria=False,
- adaptive_update_tolerance=None,
- eps=None,
- )
Computes the eigenbases of the preconditioner using torch.linalg.eigh decomposition.
- Parameters:
kronecker_factor_list (List[Tensor]) – List of matrices to compute eigenbases of
convert_to_float (bool) – If True, preconditioner matrices and their corresponding orthonormal matrices will be cast to float. Otherwise, they are left in their original dtype.
eigenbasis_list (List[Tensor] | None) – List of orthonormal eigenbases of the Kronecker factor matrices
use_adaptive_criteria (bool) – Whether to use the adaptive update-criterion strategy
adaptive_update_tolerance (float | None) – Tolerance threshold for the normalized diagonal component of the approximated eigenvalue matrix. If None, defaults to 1e-7, which is appropriate for single-precision computations.
eps (float | None) – Small offset for numerical stability. If None, uses dtype-appropriate values (1e-7 for float32, 1e-15 for float64)
- Returns:
List of orthonormal Kronecker factor eigenbasis matrices
- Return type:
List[torch.Tensor]
Example
# Create sample Kronecker factors (symmetric positive definite matrices)
k_factor1 = torch.randn(4, 4)
k_factor1 = k_factor1 @ k_factor1.T  # Make symmetric positive definite
k_factor2 = torch.randn(5, 5)
k_factor2 = k_factor2 @ k_factor2.T  # Make symmetric positive definite
# Get orthogonal matrices for these factors
ortho_matrices = get_eigenbasis_eigh([k_factor1, k_factor2])
# ortho_matrices[0] has shape [4, 4] and ortho_matrices[1] has shape [5, 5]
- emerging_optimizers.soap.soap_utils.get_eigenbasis_qr(
- kronecker_factor_list,
- eigenbasis_list,
- exp_avg_sq,
- convert_to_float=True,
- use_adaptive_criteria=False,
- adaptive_update_tolerance=None,
- power_iter_steps=1,
- )
Updates the eigenbases of the preconditioner using power iteration and QR.
Computes the eigenbases using multiple rounds of power iteration followed by QR decomposition (orthogonal iteration).
- Parameters:
kronecker_factor_list (List[Tensor]) – List containing the preconditioners (\(GG^T\) and \(G^TG\))
eigenbasis_list (List[Tensor]) – List containing the eigenbases (\(Q_L\) and \(Q_R\))
exp_avg_sq (Tensor) – Inner Adam's second moment (exp_avg_sq). This tensor is modified in-place.
convert_to_float (bool) – If True, preconditioner matrices and their corresponding orthonormal matrices will be cast to float. Otherwise, they are left in their original dtype.
use_adaptive_criteria (bool) – Whether to use the adaptive update-criterion strategy
adaptive_update_tolerance (float | None) – Tolerance threshold for the normalized diagonal component of the approximated eigenvalue matrix. If None, defaults to 1e-7, which is appropriate for single-precision computations; with such a tight tolerance, even a small change in the approximated eigenvalue matrix triggers a QR update.
power_iter_steps (int) – Number of power iteration steps to perform before QR decomposition. More steps can improve convergence at the cost of computation time.
- Returns:
- A tuple containing:
List[torch.Tensor]: Updated list of orthonormal Kronecker factor eigenbasis matrices
torch.Tensor: Updated (sorted) inner Adam second moment
- Return type:
Tuple[List[torch.Tensor], torch.Tensor]
Example
# Create sample Kronecker factors (symmetric positive definite matrices)
n, m = 10, 20
k_factor1 = torch.randn(n, n)
k_factor1 = k_factor1 @ k_factor1.T  # Make symmetric positive definite
k_factor2 = torch.randn(m, m)
k_factor2 = k_factor2 @ k_factor2.T  # Make symmetric positive definite
# Get orthogonal matrices for these Kronecker factors
kronecker_factor_list = [k_factor1, k_factor2]
eigenbasis_list = get_eigenbasis_eigh(kronecker_factor_list)
# Perturb the Kronecker factor matrices, simulating the effect of gradient updates
perturbation = 1e-2 * torch.randn(n, m)
perturbed_kronecker_factor_list = [
    k_factor1 + perturbation @ perturbation.T,
    k_factor2 + perturbation.T @ perturbation,
]
# Initialize exp_avg_sq tensor
exp_avg_sq = torch.randn(n, m).abs()
# Refine the orthogonal matrices using QR
updated_ortho_matrices, updated_exp_avg_sq = get_eigenbasis_qr(
    perturbed_kronecker_factor_list, eigenbasis_list, exp_avg_sq
)