# pylops.optimization.cls_sparsity.ISTA

class pylops.optimization.cls_sparsity.ISTA(Op, callbacks=None)

Iterative Shrinkage-Thresholding Algorithm (ISTA).

Solve an optimization problem with $$L_p, \; p=0, 0.5, 1$$ regularization, given the operator Op and data y. The operator can be real or complex, and should ideally be either square ($$N=M$$) or underdetermined ($$N<M$$).

Parameters:
    Op : pylops.LinearOperator
        Operator to invert

Raises:
    NotImplementedError
        If threshkind is not one of 'hard', 'soft', 'half', 'hard-percentile', 'soft-percentile', or 'half-percentile'
    ValueError
        If perc=None when threshkind is 'soft-percentile' or 'half-percentile'
    ValueError
        If monitorres=True and the residual increases
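A minimal instantiation sketch; the operator, sizes, and random data below are illustrative, not part of this API:

```python
import numpy as np
import pylops
from pylops.optimization.cls_sparsity import ISTA

# Hypothetical modelling operator: a random dense matrix wrapped as a
# pylops LinearOperator (N=50 < M=100, i.e., underdetermined as recommended)
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 100))
Op = pylops.MatrixMult(A)

istasolver = ISTA(Op)
```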

See Also

OMP
    Orthogonal Matching Pursuit (OMP).
FISTA
    Fast Iterative Shrinkage-Thresholding Algorithm (FISTA).
SPGL1
    Spectral Projected-Gradient for L1 norm (SPGL1).
SplitBregman
    Split Bregman for mixed L2-L1 norms.

Notes

Solves the following synthesis problem for the operator $$\mathbf{Op}$$ and the data $$\mathbf{y}$$:

$J = \|\mathbf{y} - \mathbf{Op}\,\mathbf{x}\|_2^2 + \epsilon \|\mathbf{x}\|_p$

or the analysis problem:

$J = \|\mathbf{y} - \mathbf{Op}\,\mathbf{x}\|_2^2 + \epsilon \|\mathbf{SOp}^H\,\mathbf{x}\|_p$

if SOp is provided. Note that in the first (synthesis) case, SOp should be absorbed into the modelling operator (i.e., Op = GOp * SOp), as sketched below.
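A sketch of the two formulations; GOp and SOp here are illustrative placeholder operators, not part of this API:

```python
import numpy as np
import pylops

rng = np.random.default_rng(0)

# Hypothetical physics operator GOp and sparsifying dictionary SOp
GOp = pylops.MatrixMult(rng.standard_normal((64, 64)))
SOp = pylops.MatrixMult(rng.standard_normal((64, 128)))

# Synthesis problem: fold the dictionary into the modelling operator and
# solve for sparse coefficients x; the physical model is then SOp @ x
Op = GOp * SOp

# Analysis problem: keep the operators separate and pass the transform
# to the solver via the SOp keyword of setup/solve instead
```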

The Iterative Shrinkage-Thresholding Algorithm (ISTA) [1] is used, where $$p=0, 0.5, 1$$. This simple iterative algorithm applies the following step:

$\mathbf{x}^{(i+1)} = T_{(\epsilon \alpha /2, p)} \left(\mathbf{x}^{(i)} + \alpha\,\mathbf{Op}^H \left(\mathbf{y} - \mathbf{Op}\,\mathbf{x}^{(i)}\right)\right)$

or

$\mathbf{x}^{(i+1)} = \mathbf{SOp}\left\{T_{(\epsilon \alpha /2, p)} \left(\mathbf{SOp}^H\left(\mathbf{x}^{(i)} + \alpha\,\mathbf{Op}^H \left(\mathbf{y} - \mathbf{Op}\,\mathbf{x}^{(i)}\right)\right)\right)\right\}$

where $$\epsilon \alpha /2$$ is the threshold and $$T_{(\tau, p)}$$ is the thresholding rule. The most common variant of ISTA uses the so-called soft-thresholding rule $$T(\tau, p=1)$$. Alternatively, a hard-thresholding rule is used in the case of $$p=0$$ and a half-thresholding rule in the case of $$p=1/2$$. Finally, percentile-based thresholds are also implemented: the damping factor is no longer used and the threshold changes at every iteration based on the computed percentile.
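The iteration above fits in a few lines of NumPy. The following is a sketch of the soft-thresholding variant ($$p=1$$) only, with an illustrative random problem; the hard and half rules follow the same pattern with different elementwise thresholding functions:

```python
import numpy as np

def soft_threshold(z, tau):
    # T(tau, p=1): elementwise soft thresholding (real-valued sketch)
    return np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)

def ista_step(x, A, y, alpha, eps):
    # One ISTA update: x <- T(eps*alpha/2, p=1)(x + alpha * A^H (y - A x))
    grad = A.conj().T @ (y - A @ x)
    return soft_threshold(x + alpha * grad, eps * alpha / 2.0)

# Illustrative problem: recover a sparse vector from underdetermined data
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 100))
x_true = np.zeros(100)
x_true[[10, 40, 70]] = [1.0, -2.0, 0.5]
y = A @ x_true

alpha = 1.0 / np.linalg.norm(A, 2) ** 2  # 1/lambda_max of A^H A
x = np.zeros(100)
for _ in range(500):
    x = ista_step(x, A, y, alpha, eps=0.5)
```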

[1] Daubechies, I., Defrise, M., and De Mol, C., “An iterative thresholding algorithm for linear inverse problems with a sparsity constraint”, Communications on Pure and Applied Mathematics, vol. 57, pp. 1413-1457, 2004.

Methods

__init__(Op[, callbacks])
    Initialize self.
callback(x, *args, **kwargs)
    Callback routine
finalize([show])
    Finalize solver
run(x[, niter, show, itershow])
    Run solver
setup(y[, x0, niter, SOp, eps, alpha, …])
    Setup solver
solve(y[, x0, niter, SOp, eps, alpha, …])
    Run entire solver
step(x[, show])
    Run one step of solver
setup(y, x0=None, niter=None, SOp=None, eps=0.1, alpha=None, eigsdict=None, tol=1e-10, threshkind='soft', perc=None, decay=None, monitorres=False, show=False)

Setup solver

Parameters:
    y : np.ndarray
        Data of size $$[N \times 1]$$
    x0 : np.ndarray, optional
        Initial guess
    niter : int
        Number of iterations
    SOp : pylops.LinearOperator, optional
        Regularization operator (used when solving the analysis problem)
    eps : float, optional
        Sparsity damping
    alpha : float, optional
        Step size. To guarantee convergence, ensure $$\alpha \le 1/\lambda_\text{max}$$, where $$\lambda_\text{max}$$ is the largest eigenvalue of $$\mathbf{Op}^H\mathbf{Op}$$. If None, the maximum eigenvalue is estimated and the optimal step size is chosen as $$1/\lambda_\text{max}$$. If provided, the convergence criterion is not checked internally.
    eigsdict : dict, optional
        Dictionary of parameters passed to the pylops.LinearOperator.eigs method when computing the maximum eigenvalue
    tol : float, optional
        Tolerance. Iterations stop when the difference between the inverted model at subsequent iterations is smaller than tol
    threshkind : str, optional
        Kind of thresholding: 'hard', 'soft', 'half', 'hard-percentile', 'soft-percentile', or 'half-percentile' ('soft' by default)
    perc : float, optional
        Percentile, as percentage of values to be kept by thresholding (to be provided when thresholding is soft-percentile or half-percentile)
    decay : np.ndarray, optional
        Decay factor applied to thresholding during iterations
    monitorres : bool, optional
        Monitor that the residual is decreasing
    show : bool, optional
        Display setup log
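A sketch of a setup call, reusing the hypothetical Op and data y from the earlier examples; parameter values are illustrative:

```python
# Assumes Op and y from the earlier sketches. Capturing the return value
# assumes setup returns the initial model vector, as in other pylops
# class-based solvers.
istasolver = ISTA(Op)
x = istasolver.setup(y, niter=100, eps=0.5, threshkind="soft", show=True)
```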
step(x, show=False)

Run one step of solver

Parameters:
    x : np.ndarray
        Current model vector to be updated by a step of ISTA
    show : bool, optional
        Display iteration log

Returns:
    x : np.ndarray
        Updated model vector
    xupdate : float
        Norm of the update
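A sketch of stepping the solver manually after setup, following the return values documented above:

```python
# Continue from the setup sketch; per the documentation above, step
# returns the updated model and the norm of the update
for _ in range(10):
    x, xupdate = istasolver.step(x, show=True)
    if xupdate < 1e-10:  # same role as the tol parameter of setup
        break
```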
run(x, niter=None, show=False, itershow=[10, 10, 10])

Run solver

Parameters:
    x : np.ndarray
        Current model vector to be updated by multiple steps of ISTA
    niter : int, optional
        Number of iterations. Can be set to None if already provided in the setup call
    show : bool, optional
        Display logs
    itershow : list, optional
        Display iteration logs for the first N1 steps, the last N2 steps, and every N3 steps in between, where N1, N2, N3 are the three elements of the list

Returns:
    x : np.ndarray
        Estimated model of size $$[M \times 1]$$
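Equivalently, run drives multiple steps at once; a sketch continuing from the setup example:

```python
# Run the remaining iterations in one call; niter can be omitted if it
# was already passed to setup
x = istasolver.run(x, niter=100, show=True, itershow=[10, 10, 10])
```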
finalize(show=False)

Finalize solver

Parameters:
    show : bool, optional
        Display finalize log
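Putting the pieces together, the manual lifecycle mirrors what solve does internally; a sketch under the same assumptions as the earlier examples:

```python
istasolver = ISTA(Op)
x = istasolver.setup(y, niter=100, eps=0.5)  # assumes setup returns x0
x = istasolver.run(x)                        # iterate to completion
istasolver.finalize(show=True)               # display the finalize log
```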
solve(y, x0=None, niter=None, SOp=None, eps=0.1, alpha=None, eigsdict=None, tol=1e-10, threshkind='soft', perc=None, decay=None, monitorres=False, show=False, itershow=[10, 10, 10])

Run entire solver

Parameters:
    y : np.ndarray
        Data of size $$[N \times 1]$$
    x0 : np.ndarray, optional
        Initial guess
    niter : int
        Number of iterations
    SOp : pylops.LinearOperator, optional
        Regularization operator (used when solving the analysis problem)
    eps : float, optional
        Sparsity damping
    alpha : float, optional
        Step size. To guarantee convergence, ensure $$\alpha \le 1/\lambda_\text{max}$$, where $$\lambda_\text{max}$$ is the largest eigenvalue of $$\mathbf{Op}^H\mathbf{Op}$$. If None, the maximum eigenvalue is estimated and the optimal step size is chosen as $$1/\lambda_\text{max}$$. If provided, the convergence criterion is not checked internally.
    eigsdict : dict, optional
        Dictionary of parameters passed to the pylops.LinearOperator.eigs method when computing the maximum eigenvalue
    tol : float, optional
        Tolerance. Iterations stop when the difference between the inverted model at subsequent iterations is smaller than tol
    threshkind : str, optional
        Kind of thresholding: 'hard', 'soft', 'half', 'hard-percentile', 'soft-percentile', or 'half-percentile' ('soft' by default)
    perc : float, optional
        Percentile, as percentage of values to be kept by thresholding (to be provided when thresholding is soft-percentile or half-percentile)
    decay : np.ndarray, optional
        Decay factor applied to thresholding during iterations
    monitorres : bool, optional
        Monitor that the residual is decreasing
    show : bool, optional
        Display logs
    itershow : list, optional
        Display iteration logs for the first N1 steps, the last N2 steps, and every N3 steps in between, where N1, N2, N3 are the three elements of the list

Returns:
    x : np.ndarray
        Estimated model of size $$[M \times 1]$$
    niter : int
        Number of effective iterations
    cost : np.ndarray, optional
        History of the cost function
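A sketch of an end-to-end solve call, with the three documented return values unpacked; data and parameter values are illustrative:

```python
# Assumes Op (50 x 100) from the first sketch; build sparse synthetic data
x_true = np.zeros(100)
x_true[[10, 40, 70]] = [1.0, -2.0, 0.5]
y = Op @ x_true

istasolver = ISTA(Op)
x_est, niter_eff, cost = istasolver.solve(y, niter=200, eps=0.5,
                                          threshkind="soft", show=True)
```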

## Examples using pylops.optimization.cls_sparsity.ISTA