# pylops.optimization.cls_leastsquares.NormalEquationsInversion

`class pylops.optimization.cls_leastsquares.NormalEquationsInversion(Op, callbacks=None)`

Inversion of normal equations.

Solve the regularized normal equations for a system of equations, given the operator `Op`, a data weighting operator `Weight`, and optionally a list of regularization terms `Regs` and/or normal regularization terms `NRegs`.

Parameters:

- **Op** (`pylops.LinearOperator`): Operator to invert of size $$[N \times M]$$.

See Also

- `RegularizedInversion`: Regularized inversion
- `PreconditionedInversion`: Preconditioned inversion

Notes

Solve the following normal equations for a system of regularized equations given the operator $$\mathbf{Op}$$, a data weighting operator $$\mathbf{W}$$, a list of regularization terms ($$\mathbf{R}_i$$ and/or $$\mathbf{N}_i$$), the data $$\mathbf{y}$$ and regularization data $$\mathbf{y}_{\mathbf{R}_i}$$, and the damping factors $$\epsilon_I$$, $$\epsilon_{\mathbf{R}_i}$$ and $$\epsilon_{\mathbf{N}_i}$$:

$$\left( \mathbf{Op}^T \mathbf{W} \mathbf{Op} + \sum_i \epsilon_{\mathbf{R}_i}^2 \mathbf{R}_i^T \mathbf{R}_i + \sum_i \epsilon_{\mathbf{N}_i}^2 \mathbf{N}_i + \epsilon_I^2 \mathbf{I} \right) \mathbf{x} = \mathbf{Op}^T \mathbf{W} \mathbf{y} + \sum_i \epsilon_{\mathbf{R}_i}^2 \mathbf{R}_i^T \mathbf{y}_{\mathbf{R}_i}$$

Note that the data term of the regularizations $$\mathbf{N}_i$$ is implicitly assumed to be zero.
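As an illustration of the equation above (plain NumPy/SciPy, independent of the pylops API; all names below are hypothetical), the regularized normal equations can be assembled explicitly for a small dense problem and solved with conjugate gradients:

```python
import numpy as np
from scipy.sparse.linalg import cg

rng = np.random.default_rng(0)
N, M = 30, 20
Op = rng.normal(size=(N, M))        # forward operator of size [N x M]
W = np.eye(N)                       # data weighting operator
R = np.diff(np.eye(M), axis=0)      # first-derivative regularization R_1
x_true = np.linspace(0.0, 1.0, M)
y = Op @ x_true                     # noise-free data
epsR, epsI = 1e-2, 1e-3             # damping factors

# Left- and right-hand sides of the regularized normal equations:
# (Op^T W Op + epsR^2 R^T R + epsI^2 I) x = Op^T W y
# (the regularization data y_R is taken as zero here)
lhs = Op.T @ W @ Op + epsR**2 * (R.T @ R) + epsI**2 * np.eye(M)
rhs = Op.T @ W @ y
xinv, istop = cg(lhs, rhs, maxiter=200)
```

Since the left-hand side is symmetric positive definite by construction, conjugate gradients is an appropriate solver, mirroring what the class does internally with `scipy.sparse.linalg.cg`.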

Methods

| Method | Description |
| --- | --- |
| `__init__(Op[, callbacks])` | Initialize self |
| `callback(x, *args, **kwargs)` | Callback routine |
| `finalize(*args[, show])` | Finalize solver |
| `run(x[, engine, show])` | Run solver |
| `setup(y, Regs[, Weight, dataregs, epsI, …])` | Setup solver |
| `solve(y, Regs[, x0, Weight, dataregs, epsI, …])` | Run entire solver |
| `step()` | Run one step of solver |
`setup(y, Regs, Weight=None, dataregs=None, epsI=0, epsRs=None, NRegs=None, epsNRs=None, show=False)`

Setup solver

Parameters:

- **y** (`np.ndarray`): Data of size $$[N \times 1]$$
- **Regs** (`list`): Regularization operators (`None` to avoid adding regularization)
- **Weight** (`pylops.LinearOperator`, optional): Weight operator
- **dataregs** (`list`, optional): Regularization data (must have the same number of elements as `Regs`)
- **epsI** (`float`, optional): Tikhonov damping
- **epsRs** (`list`, optional): Regularization dampings (must have the same number of elements as `Regs`)
- **NRegs** (`list`): Normal regularization operators (`None` to avoid adding regularization). Such operators must apply the chain of the forward and the adjoint in one go. This can be convenient in cases where a faster implementation is available compared to applying the forward followed by the adjoint.
- **epsNRs** (`list`, optional): Regularization dampings for normal operators (must have the same number of elements as `NRegs`)
- **show** (`bool`, optional): Display setup log
`step()`

Run one step of solver

This method runs one step of the solver. Users can change the function signature to include any other input parameters required when applying one step of the solver.

Parameters:

- **x** (`np.ndarray`): Current model vector to be updated by a step of the solver
- **show** (`bool`, optional): Display step log
`run(x, engine='scipy', show=False, **kwargs_solver)`

Run solver

Parameters:

- **x** (`np.ndarray`): Current model vector to be updated by multiple steps of the solver. If `None`, `x` is assumed to be a zero vector
- **engine** (`str`, optional): Solver to use (`scipy` or `pylops`)
- **show** (`bool`, optional): Display iterations log
- **\*\*kwargs_solver**: Arbitrary keyword arguments for the chosen solver (`scipy.sparse.linalg.cg` and `pylops.optimization.solver.cg` are used as default for numpy and cupy data, respectively)

Note: when the user does not supply `atol`, it is set to `"legacy"`.

Returns:

- **xinv** (`numpy.ndarray`): Inverted model
- **istop** (`int`): Convergence information (only when using `scipy.sparse.linalg.cg`):
  - `0`: successful exit
  - `>0`: convergence to tolerance not achieved, number of iterations
  - `<0`: illegal input or breakdown
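For reference, a minimal sketch of how `scipy.sparse.linalg.cg` reports the convergence code described above (plain SciPy, independent of pylops; the system below is illustrative):

```python
import numpy as np
from scipy.sparse.linalg import cg

# A small symmetric positive-definite system, as required by cg
A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
b = np.array([1.0, 2.0])

x, info = cg(A, b)
# info == 0 signals a successful exit;
# info > 0 means the tolerance was not reached within maxiter iterations
```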
`solve(y, Regs, x0=None, Weight=None, dataregs=None, epsI=0, epsRs=None, NRegs=None, epsNRs=None, engine='scipy', show=False, **kwargs_solver)`

Run entire solver

Parameters:

- **y** (`np.ndarray`): Data of size $$[N \times 1]$$
- **Regs** (`list`): Regularization operators (`None` to avoid adding regularization)
- **x0** (`np.ndarray`, optional): Initial guess of size $$[M \times 1]$$. If `None`, initialize internally as zero vector
- **Weight** (`pylops.LinearOperator`, optional): Weight operator
- **dataregs** (`list`, optional): Regularization data (must have the same number of elements as `Regs`)
- **epsI** (`float`, optional): Tikhonov damping
- **epsRs** (`list`, optional): Regularization dampings (must have the same number of elements as `Regs`)
- **NRegs** (`list`): Normal regularization operators (`None` to avoid adding regularization). Such operators must apply the chain of the forward and the adjoint in one go. This can be convenient in cases where a faster implementation is available compared to applying the forward followed by the adjoint.
- **epsNRs** (`list`, optional): Regularization dampings for normal operators (must have the same number of elements as `NRegs`)
- **engine** (`str`, optional): Solver to use (`scipy` or `pylops`)
- **show** (`bool`, optional): Display setup log
- **\*\*kwargs_solver**: Arbitrary keyword arguments for the chosen solver

Returns:

- **x** (`np.ndarray`): Estimated model of size $$[M \times 1]$$
- **istop** (`int`): Convergence information (only when using `scipy.sparse.linalg.cg`):
  - `0`: successful exit
  - `>0`: convergence to tolerance not achieved, number of iterations
  - `<0`: illegal input or breakdown