
Iterative Thresholding with Inversion Algorithm

Updated 17 January 2026
  • ITI is a sparse signal recovery method that employs iterative hard-thresholding integrated with inversion and null-space tuning for enhanced performance.
  • The algorithm leverages an eigenvalue-based suboptimal feedback approach to significantly reduce computational complexity while maintaining high accuracy.
  • Empirical studies show that ITI achieves comparable NMSE to projection-based methods with dramatically lower runtime in large-scale compressive sensing settings.

Iterative Thresholding with Inversion (ITI) is a framework for sparse signal recovery from undersampled linear measurements. The method is fundamentally characterized by the incorporation of an inversion step—either exact or suboptimal—within an iterative hard-thresholding process, coupled with additional null-space correction. ITI, also referred to as Null-Space Tuning with Hard Thresholding and Feedback (NST+HT+FB), is notable for achieving recovery accuracy comparable to projection-based algorithms while offering significantly improved computational scalability, particularly when leveraging an eigenvalue-based suboptimal feedback strategy. This makes ITI applicable to large-scale compressive sensing and related inverse problems (Song et al., 2017).

1. Mathematical Formulation and Algorithmic Structure

The ITI algorithm addresses the canonical sparse recovery model:

$$y = Ax + e,$$

where $x \in \mathbb{R}^n$ is $k$-sparse, $A \in \mathbb{R}^{m \times n}$ (with $m \ll n$) is the measurement matrix, $y \in \mathbb{R}^m$ is the vector of measurements, and $e \in \mathbb{R}^m$ is noise. Sparsity is enforced by the hard-thresholding operator $H_k$, which retains only the $k$ largest-magnitude entries of a vector and zeroes the rest.
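For concreteness, the following NumPy sketch sets up a small synthetic instance of this model and implements $H_k$; the dimensions, matrix ensemble, and noise level are illustrative assumptions, not values from (Song et al., 2017).

```python
import numpy as np

def hard_threshold(v, k):
    """H_k: keep the k largest-magnitude entries of v, zero out the rest."""
    out = np.zeros_like(v)
    idx = np.argpartition(np.abs(v), -k)[-k:]  # indices of the k largest |v_i|
    out[idx] = v[idx]
    return out

# Small synthetic instance of y = A x + e (all sizes illustrative).
rng = np.random.default_rng(0)
m, n, k = 50, 200, 5
A = rng.standard_normal((m, n)) / np.sqrt(m)   # Gaussian measurement matrix
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)  # k-sparse signal
e = 1e-3 * rng.standard_normal(m)              # small measurement noise
y = A @ x + e
```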

Each ITI iteration proceeds as follows:

  1. Feedback Step: Compute the proxy

$$g^{(t)} = x^{(t)} + A^\top (A A^\top)^{-1}\left(y - A x^{(t)}\right).$$

  2. Hard Thresholding: Compute the sparse proxy

$$\mu^{(t)} = H_k\left(g^{(t)}\right).$$

  3. Feasibility Tuning: Project back to the feasible set via the orthogonal projector onto the null space of $A$:

$$x^{(t+1)} = \mu^{(t)} + A^\top (A A^\top)^{-1}\left(y - A \mu^{(t)}\right).$$

Alternatively, a merged update is available:

$$x^{(t+1)} = H_k\!\left(x^{(t)} + A^\top (A A^\top)^{-1}\left(y - A x^{(t)}\right)\right).$$

These iterations drive the iterates toward $k$-sparsity while enforcing measurement consistency $Ax = y$. The null-space tuning operator is

$$P = I - A^\top (A A^\top)^{-1} A,$$

which maps arbitrary updates back to the feasible set (Song et al., 2017).
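Below is a minimal sketch of the full iteration, reusing the `hard_threshold` helper and synthetic instance from Section 1's snippet. It applies the three steps literally and solves against $AA^\top$ rather than forming an explicit inverse; this is a small-scale illustration, not the paper's reference implementation.

```python
def iti(A, y, k, iters=50):
    """ITI / NST+HT+FB loop: feedback, hard thresholding, feasibility tuning."""
    AAt = A @ A.T                    # m x m matrix, assumed invertible
    x_hat = np.zeros(A.shape[1])
    for _ in range(iters):
        # Feedback step: g = x + A^T (A A^T)^{-1} (y - A x)
        g = x_hat + A.T @ np.linalg.solve(AAt, y - A @ x_hat)
        # Hard thresholding: mu = H_k(g)
        mu = hard_threshold(g, k)
        # Feasibility (null-space) tuning: x <- mu + A^T (A A^T)^{-1} (y - A mu)
        x_hat = mu + A.T @ np.linalg.solve(AAt, y - A @ mu)
    return x_hat

x_rec = iti(A, y, k)
print("relative error:", np.linalg.norm(x_rec - x) / np.linalg.norm(x))
```

For the merged update, one would instead return the thresholded proxy $H_k(g^{(t)})$ directly as the next iterate.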

2. Suboptimal Feedback and Eigenvalue-Based Approximation

In standard ITI or NST+HT+FB, support-based inversion operations, such as inverting $A_T^\top A_T$ (where $T$ indexes the active support), are required, incurring $O(k^3)$ computational cost per iteration. For large-scale or high-sparsity settings, this cost is prohibitive.

To reduce complexity, the NST+HT+subOptFB variant replaces the explicit inversion with an eigenvalue-based scalar approximation. Specifically, under the restricted isometry property (RIP) and the preconditioned RIP (P-RIP), the Gram matrix $A_T^\top A_T$ has eigenvalues in $[1-\delta, 1+\delta]$, with $\delta$ an appropriate RIP constant. Its inverse is replaced by $\lambda^{-1} I$, where $\lambda \in [1-\delta, 1+\delta]$ is chosen to control contraction.

The update becomes:

  • Identify $T_t$ as the set of the $k$ largest-magnitude indices of $x^{(t)}$.
  • For entries in $T_t$, set

$$\mu^{(t)}_{T_t} = x^{(t)}_{T_t} + \lambda^{-1} A_{T_t}^\top \left(A x^{(t)} - A_{T_t} x^{(t)}_{T_t}\right),$$

and $\mu^{(t)}_{T_t^c} = 0$.

  • Update as before:

$$x^{(t+1)} = \mu^{(t)} + A^\top (A A^\top)^{-1}\left(y - A \mu^{(t)}\right).$$

This version preserves the key properties of ITI while reducing the per-iteration complexity from $O(mn + k^3)$ to $O(mn + n)$ (Song et al., 2017).
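The following sketch implements one NST+HT+subOptFB step under the same conventions as the earlier snippets; the value of `lam` is a placeholder, since its admissible range depends on the RIP constants discussed in Section 3.

```python
def subopt_feedback_step(A, AAt, x_hat, y, k, lam):
    """One NST+HT+subOptFB iteration: scalar feedback in place of inverting
    the support Gram matrix A_T^T A_T, followed by null-space tuning."""
    T = np.argpartition(np.abs(x_hat), -k)[-k:]   # k largest-magnitude indices
    A_T = A[:, T]
    mu = np.zeros_like(x_hat)
    # Feedback with lambda^{-1} I substituted for (A_T^T A_T)^{-1}:
    mu[T] = x_hat[T] + (A_T.T @ (A @ x_hat - A_T @ x_hat[T])) / lam
    # Null-space tuning back to the affine set {z : A z = y}:
    return mu + A.T @ np.linalg.solve(AAt, y - A @ mu)
```

Only matrix–vector products appear in the feedback, so the per-iteration cost is dominated by the $O(mn)$ multiplications.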

3. Convergence Guarantees

The convergence analysis employs the preconditioned restricted isometry property (P-RIP): for every $s$-sparse vector $u$,

$$(1 - \gamma_s)\,\|u\|_2^2 \;\leq\; \left\|(A A^\top)^{-1/2} A u\right\|_2^2 \;\leq\; (1 + \gamma_s)\,\|u\|_2^2,$$

where $\gamma_s$ is the P-RIP constant.

Theoretical results show that for $y = Ax + \eta$ with $x$ being $s$-sparse and $\|\eta\|_2 \leq \epsilon$, if $\gamma_{3s} < \sqrt{2}/4$ and $\lambda$ is sufficiently large (depending on $\gamma_{3s}$ and related constants), then the iterates $\mu^{(t)}$ satisfy

$$\|x - \mu^{(t)}\|_2 \;\leq\; \rho_{3s}^t\,\|x - \mu^{(0)}\|_2 + \kappa_{3s}\,\frac{1 - \rho_{3s}^t}{1 - \rho_{3s}}\,\epsilon,$$

with explicit expressions for $\rho_{3s} < 1$ and $\kappa_{3s}$ given in (Song et al., 2017).

In the special case where $A$ is a Parseval frame ($A A^\top = I$), one may guarantee $\rho < 1$ by choosing $\lambda > 3.9$ whenever $\delta_{3s} \leq 1/4$.
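To see what the bound implies numerically, the snippet below evaluates it for hypothetical constants; `rho`, `kappa`, and `eps` are placeholders, not the paper's explicit values.

```python
# Hypothetical constants for illustration only.
rho, kappa, eps = 0.5, 2.0, 1e-3   # rho_{3s}, kappa_{3s}, noise level
e0 = 1.0                           # initial error ||x - mu^(0)||_2
for t in (1, 5, 10, 20):
    bound = rho**t * e0 + kappa * (1 - rho**t) / (1 - rho) * eps
    print(f"t = {t:2d}:  ||x - mu^(t)||_2 <= {bound:.6f}")
# The bound decays geometrically to the noise floor
# kappa * eps / (1 - rho) = 4e-3 as t grows.
```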

4. Computational Complexity

The computational cost per iteration is summarized as follows:

| Algorithm Variant | Per-Iteration Complexity | Dominant Costs |
|---|---|---|
| NST+HT+FB (exact feedback) | $O(mn + k^3)$ | Gram-matrix inversion on the active support ($O(k^3)$) |
| NST+HT+subOptFB (suboptimal) | $O(mn + n)$ | Matrix–vector multiply ($O(mn)$); thresholding ($O(n)$) |
| HTP (Hard-Thresholding Pursuit) | $O(mn + k^3)$ | Inversion of a $k \times k$ matrix |

The eigenvalue-based suboptimal feedback approach eliminates the per-iteration $O(k^3)$ bottleneck, making the method scalable to problem sizes with $n \geq 10^5$ and large $k$. The complexity is ultimately controlled by the cost of matrix–vector multiplication (Song et al., 2017).
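A back-of-the-envelope comparison makes the gap concrete; these are crude operation counts that ignore constants, at sizes chosen to mirror the large-scale regimes reported below.

```python
n, m = 100_000, 50_000
k = int(0.3 * m)                 # s/m = 0.3  ->  k = 15,000
exact  = m * n + k**3            # NST+HT+FB / HTP: O(mn + k^3)
subopt = m * n + n               # NST+HT+subOptFB: O(mn + n)
print(f"exact feedback  ~ {exact:.2e} ops")   # ~3.4e12, dominated by k^3
print(f"subopt feedback ~ {subopt:.2e} ops")  # ~5.0e9, dominated by mn
```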

5. Adaptive Iterative Thresholding Without Sparsity Knowledge

The adaptive algorithm (AdptNST+HT+subOptFB) addresses scenarios where the true sparsity level kk is unknown. The procedure incrementally increases the support size at each iteration. At iteration tt:

  1. Form the proxy $g^{(t)} = x^{(t)} + \lambda^{-1} A^\top (y - A x^{(t)})$.
  2. Let $T_t$ be the indices of the $t$ largest magnitudes of $g^{(t)}$.
  3. Set $\mu^{(t)}_{T_t} = g^{(t)}_{T_t}$, with all other entries zero.
  4. Null-space tune: $x^{(t+1)} = \mu^{(t)} + A^\top (A A^\top)^{-1}(y - A \mu^{(t)})$.

Convergence holds under analogous P-RIP conditions, with geometric error decay once the support is recovered, which occurs within $O(s)$ iterations, where $s$ is the true sparsity level (Song et al., 2017).
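A sketch of the adaptive variant, under the same conventions as the earlier snippets: the fixed iteration cap is a simplification (a practical implementation would monitor the residual to decide when to stop), and `lam` is again a placeholder.

```python
def adaptive_iti(A, y, lam, max_iters):
    """AdptNST+HT+subOptFB sketch: grow the support size with the iteration
    counter t instead of assuming the sparsity level k is known."""
    AAt = A @ A.T
    x_hat = np.zeros(A.shape[1])
    for t in range(1, max_iters + 1):
        # Proxy with scalar feedback: g = x + lambda^{-1} A^T (y - A x)
        g = x_hat + (A.T @ (y - A @ x_hat)) / lam
        T = np.argpartition(np.abs(g), -t)[-t:]   # t largest magnitudes
        mu = np.zeros_like(x_hat)
        mu[T] = g[T]
        # Null-space tuning:
        x_hat = mu + A.T @ np.linalg.solve(AAt, y - A @ mu)
    return x_hat
```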

6. Empirical Performance and Comparative Analysis

Extensive simulations reported in (Song et al., 2017) cover problem sizes up to $n = 10^5$, measurement ratios $m/n$ up to $0.8$, and sparsity ratios up to $0.4$. The following algorithms are compared:

  • NST+HT+FB (exact inversion)
  • NST+HT+subOptFB (suboptimal, eigenvalue-based feedback)
  • HTP (Hard-Thresholding Pursuit)
  • GHTP (graded HTP)
  • Adaptive versions: AdptNST+HT+FB, AdptNST+HT+subOptFB

Key findings include:

  • The NMSE (normalized mean-square error) achieved by NST+HT+subOptFB is on par with exact feedback and HTP across a broad range of undersampling/sparsity regimes.
  • The runtime of NST+HT+subOptFB grows linearly in $n$ and $m$ and is one order of magnitude lower than that of HTP and exact-feedback NST+HT+FB at $n = 10^5$. At $n = 10^5$, $m/n = 0.5$, $s/m = 0.3$, NST+HT+subOptFB is 3–5× faster than exact-feedback NST+HT+FB.
  • The adaptive variant AdptNST+HT+subOptFB achieves comparable NMSE to its competitors but with the lowest per-iteration cost and overall runtime.

7. Significance and Applicability

The ITI framework with null-space tuning and eigenvalue-based suboptimal feedback is distinguished by its combination of rapid geometric convergence and low per-iteration complexity. Particularly under mild P-RIP conditions, this makes ITI a compelling choice for sparse recovery in very large-scale systems, such as large-dimensional compressive sensing, where conventional inversion-based schemes become infeasible (Song et al., 2017).

