pylops.optimization.cls_basic.CGLS

class pylops.optimization.cls_basic.CGLS(Op, callbacks=None)

Conjugate gradient least squares.

Solve an overdetermined system of equations given an operator Op and data y using conjugate gradient iterations.

Parameters:
- Op : pylops.LinearOperator
  Operator to invert of size \([N \times M]\)
Notes
Minimize the following functional using conjugate gradient iterations:
\[J = || \mathbf{y} - \mathbf{Op}\,\mathbf{x} ||_2^2 + \epsilon^2 || \mathbf{x} ||_2^2\]

where \(\epsilon\) is the damping coefficient.
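As an illustration, the sketch below builds a small dense operator with pylops.MatrixMult and inverts it with the one-shot solve method; the matrix, sizes, and iteration count are illustrative assumptions, not part of the class definition:

    import numpy as np
    import pylops
    from pylops.optimization.cls_basic import CGLS

    np.random.seed(0)
    n, m = 30, 20
    A = np.random.randn(n, m)
    Op = pylops.MatrixMult(A)          # wrap the dense matrix as a linear operator
    xtrue = np.arange(m, dtype=float)
    y = Op @ xtrue                     # synthetic, noise-free data of size [n x 1]

    cglssolver = CGLS(Op)
    x, istop, iit, r1norm, r2norm, cost = cglssolver.solve(y, niter=2 * m)
    print(np.linalg.norm(x - xtrue))   # small: the consistent system is recovered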
Methods

- __init__(Op[, callbacks])
  Initialize self.
- callback(x, *args, **kwargs)
  Callback routine
- finalize([show])
  Finalize solver
- run(x[, niter, show, itershow])
  Run solver
- setup(y[, x0, niter, damp, tol, show])
  Setup solver
- solve(y[, x0, niter, damp, tol, show, itershow])
  Run entire solver
- step(x[, show])
  Run one step of solver
setup(y, x0=None, niter=None, damp=0.0, tol=0.0001, show=False)

Setup solver.

Parameters:
- y : np.ndarray
  Data of size \([N \times 1]\)
- x0 : np.ndarray, optional
  Initial guess of size \([M \times 1]\). If None, initialize internally as zero vector
- niter : int, optional
  Number of iterations (defaults to None in case a user wants to manually step over the solver)
- damp : float, optional
  Damping coefficient
- tol : float, optional
  Tolerance on residual norm
- show : bool, optional
  Display setup log

Returns:
- x : np.ndarray
  Initial guess of size \([M \times 1]\)
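For the step-by-step workflow, setup prepares the internal state without running any iterations; a minimal sketch, assuming Op and y are defined as in the example above:

    cglssolver = CGLS(Op)
    # leave niter=None so that iterations can be applied manually afterwards
    x = cglssolver.setup(y, x0=None, niter=None, damp=0.0, tol=1e-4, show=True)
    # with x0=None the returned initial guess x is the zero vector of model size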
step(x, show=False)

Run one step of solver.

Parameters:
- x : np.ndarray
  Current model vector to be updated by a step of CG
- show : bool, optional
  Display iteration log
Returns:
- x : np.ndarray
  Updated model vector
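Assuming the solver was set up as sketched above, single iterations can then be applied one at a time, e.g. to inspect or modify the model between iterations:

    for _ in range(10):                   # 10 manual CG steps (illustrative count)
        x = cglssolver.step(x, show=True)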
run(x, niter=None, show=False, itershow=[10, 10, 10])

Run solver.

Parameters:
- x : np.ndarray
  Current model vector to be updated by multiple steps of CGLS
- niter : int, optional
  Number of iterations. Can be set to None if already provided in the setup call
- show : bool, optional
  Display iterations log
- itershow : list, optional
  Display iteration log for the first N1 steps, last N2 steps, and every N3 steps in between, where N1, N2, N3 are the three elements of the list

Returns:
- x : np.ndarray
  Estimated model of size \([M \times 1]\)
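A sketch combining setup and run, where run applies several CGLS steps at once (the iteration count is illustrative):

    cglssolver = CGLS(Op)
    x = cglssolver.setup(y, damp=0.0, tol=1e-4)
    x = cglssolver.run(x, niter=40, show=True)   # returns the estimated model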
finalize(show=False)

Finalize solver.

Parameters:
- show : bool, optional
  Display finalize log
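Putting the manual workflow together, finalize is called once stepping is over; a minimal sketch under the same assumptions as above:

    cglssolver = CGLS(Op)
    x = cglssolver.setup(y, damp=0.0, tol=1e-4)
    for _ in range(40):
        x = cglssolver.step(x)
    cglssolver.finalize(show=True)   # print the final summary log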
solve(y, x0=None, niter=10, damp=0.0, tol=0.0001, show=False, itershow=[10, 10, 10])

Run entire solver.

Parameters:
- y : np.ndarray
  Data of size \([N \times 1]\)
- x0 : np.ndarray, optional
  Initial guess of size \([M \times 1]\). If None, initialize internally as zero vector
- niter : int, optional
  Number of iterations (default 10)
- damp : float, optional
  Damping coefficient
- tol : float, optional
  Tolerance on residual norm
- show : bool, optional
  Display logs
- itershow : list, optional
  Display iteration log for the first N1 steps, last N2 steps, and every N3 steps in between, where N1, N2, N3 are the three elements of the list

Returns:
- x : np.ndarray
  Estimated model of size \([M \times 1]\)
- istop : int
  Gives the reason for termination: 1 means \(\mathbf{x}\) is an approximate solution to \(\mathbf{y} = \mathbf{Op}\,\mathbf{x}\); 2 means \(\mathbf{x}\) approximately solves the least-squares problem
- iit : int
  Iteration number upon termination
- r1norm : float
  \(||\mathbf{r}||_2\), where \(\mathbf{r} = \mathbf{y} - \mathbf{Op}\,\mathbf{x}\)
- r2norm : float
  \(\sqrt{\mathbf{r}^T\mathbf{r} + \epsilon^2 \mathbf{x}^T\mathbf{x}}\). Equal to r1norm if \(\epsilon=0\)
- cost : numpy.ndarray, optional
  History of r1norm through iterations
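Finally, a sketch unpacking all quantities returned by solve, again assuming the Op and y defined in the first example; the tolerance and iteration count are illustrative:

    cglssolver = CGLS(Op)
    x, istop, iit, r1norm, r2norm, cost = cglssolver.solve(
        y, x0=None, niter=50, damp=0.0, tol=1e-4, show=True
    )
    print(istop)      # 1: approximate solution of y = Op x; 2: least-squares solution
    print(iit)        # iteration number upon termination
    print(r1norm)     # ||y - Op x||_2 at the final iterate
    print(cost[-1])   # last entry of the residual-norm history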