Iterative Thresholding with Inversion Algorithm
- ITI is a sparse signal recovery method that combines iterative hard thresholding with an inversion (feedback) step and null-space tuning to maintain measurement consistency.
- The algorithm leverages an eigenvalue-based suboptimal feedback approach to significantly reduce computational complexity while maintaining high accuracy.
- Empirical studies show that ITI achieves comparable NMSE to projection-based methods with dramatically lower runtime in large-scale compressive sensing settings.
Iterative Thresholding with Inversion (ITI) is a framework for sparse signal recovery from undersampled linear measurements. The method incorporates an inversion step, either exact or suboptimal, within an iterative hard-thresholding process, coupled with a null-space correction. ITI, also referred to as Null-Space Tuning with Hard Thresholding and Feedback (NST+HT+FB), is notable for achieving recovery accuracy comparable to projection-based algorithms while offering significantly better computational scalability, particularly when leveraging an eigenvalue-based suboptimal feedback strategy. This makes ITI applicable to large-scale compressive sensing and related inverse problems (Song et al., 2017).
1. Mathematical Formulation and Algorithmic Structure
The ITI algorithm addresses the canonical sparse recovery model
$$y = Ax + e,$$
where $x \in \mathbb{R}^N$ is $s$-sparse, $A \in \mathbb{R}^{m \times N}$ ($m \ll N$) is the measurement matrix, $y \in \mathbb{R}^m$ are the measurements, and $e$ is noise. Sparsity is enforced by a hard thresholding operator $H_s$, which retains only the $s$ largest (in magnitude) entries of a vector and sets all others to zero.
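As a point of reference, a minimal NumPy implementation of $H_s$ might look as follows (the function name `hard_threshold` is illustrative, not from the paper):

```python
import numpy as np

def hard_threshold(v: np.ndarray, s: int) -> np.ndarray:
    """H_s: keep the s largest-magnitude entries of v and zero out the rest."""
    out = np.zeros_like(v)
    if s <= 0:
        return out
    idx = np.argpartition(np.abs(v), -s)[-s:]  # indices of the s largest magnitudes
    out[idx] = v[idx]
    return out
```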
Each ITI iteration maintains a feasible iterate $x^k$ (i.e., $Ax^k = y$, initialized at the minimum-norm solution $x^0 = A^*(AA^*)^{-1}y$) and proceeds as follows:
- Hard Thresholding: compute the sparse proxy $u^k = H_s(x^k)$ and its support $T_k = \operatorname{supp}(u^k)$.
- Feedback Step: feed the discarded tail back onto the support, $w^k_{T_k} = u^k_{T_k} + (A_{T_k}^*A_{T_k})^{-1}A_{T_k}^*A(x^k - u^k)$, with $w^k = 0$ off $T_k$.
- Feasibility Tuning: project back to the feasible set $\{x : Ax = y\}$ using the orthogonal projector onto the null space of $A$: $x^{k+1} = w^k + A^*(AA^*)^{-1}(y - Aw^k)$.
Alternatively, since $x^k$ is feasible, a merged update is available:
$$x^{k+1} = x^k + \mathcal{P}\,(w^k - x^k).$$
These iterations ensure that recovered vectors gradually approach sparsity and measurement consistency as defined by $Ax = y$. The null-space tuning operator is given by
$$\mathcal{P} = I - A^*(AA^*)^{-1}A,$$
which, together with the offset $A^*(AA^*)^{-1}y$, maps arbitrary updates back to the feasible set (Song et al., 2017).
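The following sketch assembles these steps into a single exact-feedback iteration. It is a minimal illustration assuming $A$ has full row rank and $A_{T_k}$ has full column rank; the function name `nst_ht_fb_step` is not from the paper.

```python
import numpy as np

def nst_ht_fb_step(A: np.ndarray, y: np.ndarray, x: np.ndarray, s: int) -> np.ndarray:
    """One NST+HT+FB iteration; assumes the current iterate x is feasible (A @ x == y)."""
    # Hard thresholding: keep the s largest-magnitude entries, record the support T.
    u = np.zeros_like(x)
    T = np.argpartition(np.abs(x), -s)[-s:]
    u[T] = x[T]
    # Feedback: fold the discarded tail back onto T via exact support inversion.
    # Since A @ x == y, we have A @ (x - u) == y - A @ u.
    A_T = A[:, T]
    w = np.zeros_like(x)
    w[T] = u[T] + np.linalg.solve(A_T.T @ A_T, A_T.T @ (y - A @ u))
    # Null-space tuning: project back onto the feasible set {x : A @ x == y}.
    return w + A.T @ np.linalg.solve(A @ A.T, y - A @ w)
```

Starting from the minimum-norm solution $x^0 = A^*(AA^*)^{-1}y$ and calling this step repeatedly yields the full NST+HT+FB loop.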
2. Suboptimal Feedback and Eigenvalue-Based Approximation
In standard ITI or NST+HT+FB, support-based inversion operations such as inversion of the Gram matrix $A_{T}^*A_{T}$ (where $T$ indexes the active support) are required, incurring $\mathcal{O}(s^3)$ cost per iteration for the inversion alone. For large-scale or high-sparsity settings, this cost is prohibitive.
To reduce complexity, the NST+HT+subOptFB variant replaces the explicit inversion with an eigenvalue-based scalar approximation. Specifically, under the restricted isometry property (RIP) and preconditioned RIP (P-RIP), the Gram matrix $A_T^*A_T$ has eigenvalues in $[1-\delta,\, 1+\delta]$, with $\delta$ being an appropriate RIP constant. Its inverse is replaced by a scalar multiple of the identity, $\lambda I$, where $\lambda$ is chosen to control contraction.
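For intuition, a standard minimax calculation (an illustration of why a single scalar can stand in for the inverse, not necessarily the paper's exact selection rule) shows that when the spectrum is confined to $[1-\delta,\, 1+\delta]$:

```latex
\min_{\lambda}\ \max_{t \in [1-\delta,\, 1+\delta]} |1 - \lambda t|
\quad\text{is attained at}\quad
\lambda^\ast = \frac{2}{(1-\delta) + (1+\delta)} = 1,
\quad\text{giving}\quad
\|I - \lambda^\ast A_T^* A_T\|_2 \le \delta .
```

So the approximation error of the scalar surrogate is governed directly by the RIP constant $\delta$.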
The update becomes:
- Identify $T_k$ as the $s$ largest-magnitude indices of $x^k$.
- For entries in $T_k$, set $w^k_{T_k} = (H_s(x^k))_{T_k} + \lambda A_{T_k}^*\,(y - AH_s(x^k))$, and $w^k_{T_k^c} = 0$.
- Update as before: $x^{k+1} = w^k + A^*(AA^*)^{-1}(y - Aw^k)$.
This version preserves the key properties of ITI while reducing the per-iteration complexity from $\mathcal{O}(s^3 + mN)$ to $\mathcal{O}(mN)$ (Song et al., 2017).
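A corresponding sketch of the suboptimal-feedback iteration, with the support inversion replaced by the scalar $\lambda$ (supplied by the caller here; the paper gives the selection rules), might read:

```python
import numpy as np

def nst_ht_suboptfb_step(A: np.ndarray, y: np.ndarray, x: np.ndarray,
                         s: int, lam: float) -> np.ndarray:
    """One NST+HT+subOptFB iteration; x is assumed feasible (A @ x == y)."""
    u = np.zeros_like(x)
    T = np.argpartition(np.abs(x), -s)[-s:]
    u[T] = x[T]
    # Suboptimal feedback: lam * A_T^* replaces (A_T^* A_T)^{-1} A_T^*.
    w = np.zeros_like(x)
    w[T] = u[T] + lam * (A[:, T].T @ (y - A @ u))
    # Null-space tuning, exactly as in the exact-feedback variant.
    return w + A.T @ np.linalg.solve(A @ A.T, y - A @ w)
```

In practice the factorization of $AA^*$ is computed once and reused across iterations, so each step is dominated by products with $A$ and $A^*$.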
3. Convergence Guarantees
The convergence analysis employs the preconditioned restricted isometry property (P-RIP). For every $s$-sparse vector $x$,
$$(1-\gamma_s)\,\|x\|_2^2 \;\le\; \langle (AA^*)^{-1}Ax,\ Ax\rangle \;\le\; (1+\gamma_s)\,\|x\|_2^2,$$
where $\gamma_s$ is the P-RIP constant.
Theoretical results show that for $y = Ax + e$ with $x$ $s$-sparse and $e$ the measurement noise, if the P-RIP constant is sufficiently small and $\lambda$ is chosen appropriately (dependent on the P-RIP constant and related quantities), then the iterates satisfy
$$\|x^k - x\|_2 \;\le\; \rho^k\,\|x^0 - x\|_2 + \tau\,\|e\|_2, \qquad \rho < 1,$$
with explicit forms for $\rho$ and $\tau$ provided in (Song et al., 2017).
In the special case where $A$ is a Parseval frame ($AA^* = I$), the P-RIP reduces to the ordinary RIP, and $\rho < 1$ can be guaranteed by an explicit choice of $\lambda$ whenever the relevant RIP constant is sufficiently small.
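The geometric form of the bound follows from the standard unrolling of a one-step contraction; assuming a per-iteration estimate of the generic form below (a sketch of the argument, not the paper's exact constants):

```latex
\|x^{k+1} - x\|_2 \le \rho\,\|x^k - x\|_2 + c\,\|e\|_2
\;\Longrightarrow\;
\|x^k - x\|_2 \le \rho^k\,\|x^0 - x\|_2 + c \sum_{j=0}^{k-1} \rho^j\,\|e\|_2
\le \rho^k\,\|x^0 - x\|_2 + \frac{c}{1-\rho}\,\|e\|_2 ,
```

so the noise amplification factor $\tau$ can be taken as $c/(1-\rho)$.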
4. Computational Complexity
The computational cost per iteration is summarized as follows:
| Algorithm Variant | Per-Iteration Complexity | Dominant Costs |
|---|---|---|
| NST+HT+FB (exact feedback) | $\mathcal{O}(s^3 + mN)$ | Gram-matrix inversion on the active support ($\mathcal{O}(s^3)$) |
| NST+HT+subOptFB (suboptimal) | $\mathcal{O}(mN)$ | Matrix-vector multiplies ($\mathcal{O}(mN)$); thresholding ($\mathcal{O}(N\log N)$) |
| HTP (Hard-Threshold Pursuit) | $\mathcal{O}(s^3 + mN)$ | Inversion of the $s \times s$ matrix $A_T^*A_T$ |
The eigenvalue-based suboptimal feedback approach eliminates the per-iteration inversion bottleneck, making the method scalable to problems with large $N$ and large sparsity level $s$. The complexity is ultimately controlled by the cost of matrix-vector multiplication (Song et al., 2017).
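To show how the one-time factorization keeps each iteration at matrix-vector cost, a driver loop might be organized as follows (a sketch under the same assumptions as the step functions above; `cho_factor`/`cho_solve` are SciPy routines):

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def recover(A: np.ndarray, y: np.ndarray, s: int, lam: float,
            iters: int = 100) -> np.ndarray:
    """NST+HT+subOptFB driver: factor A A^* once, then iterate at O(mN) per step."""
    gram = cho_factor(A @ A.T)           # O(m^3), paid once up front
    x = A.T @ cho_solve(gram, y)         # feasible start: minimum-norm solution
    for _ in range(iters):
        u = np.zeros_like(x)
        T = np.argpartition(np.abs(x), -s)[-s:]
        u[T] = x[T]
        u[T] += lam * (A[:, T].T @ (y - A @ u))       # suboptimal feedback
        x = u + A.T @ cho_solve(gram, y - A @ u)      # null-space tuning
    return x
```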
5. Adaptive Iterative Thresholding Without Sparsity Knowledge
The adaptive algorithm (AdptNST+HT+subOptFB) addresses scenarios where the true sparsity level $s$ is unknown. The procedure incrementally increases the support size at each iteration (a sketch follows the list below). At iteration $k$:
- Form the proxy $H_k(x^k)$, where $H_k$ retains the $k$ largest entries.
- Let $T_k$ be the indices of the $k$ largest magnitudes of $x^k$.
- Set $w^k_{T_k} = (H_k(x^k))_{T_k} + \lambda A_{T_k}^*\,(y - AH_k(x^k))$, with all other entries zero.
- Null-space tune: $x^{k+1} = w^k + A^*(AA^*)^{-1}(y - Aw^k)$.
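A minimal sketch of this adaptive loop, with a simple heuristic stopping rule (the stopping criterion and the choice of `lam` are illustrative assumptions, not the paper's exact prescriptions):

```python
import numpy as np

def adpt_nst_ht_suboptfb(A: np.ndarray, y: np.ndarray, lam: float,
                         tol: float = 1e-10) -> np.ndarray:
    """AdptNST+HT+subOptFB sketch: the support size k grows with the iteration count."""
    m, N = A.shape
    gram = A @ A.T
    x = A.T @ np.linalg.solve(gram, y)   # feasible minimum-norm start
    for k in range(1, m + 1):
        u = np.zeros_like(x)
        T = np.argpartition(np.abs(x), -k)[-k:]         # k largest magnitudes
        u[T] = x[T]
        u[T] += lam * (A[:, T].T @ (y - A @ u))         # suboptimal feedback
        x = u + A.T @ np.linalg.solve(gram, y - A @ u)  # null-space tuning
        if np.linalg.norm(x - u) <= tol * np.linalg.norm(x):
            return u                                    # u is k-sparse and near-feasible
    return x
```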
Convergence holds under analogous P-RIP conditions, with geometric error decay after support recovery, which occurs within $s$ iterations, where $s$ is the true sparsity level (Song et al., 2017).
6. Empirical Performance and Comparative Analysis
Extensive simulations reported in (Song et al., 2017) cover a wide range of problem sizes, measurement ratios up to $0.8$, and sparsity ratios up to $0.4$. The compared algorithms include:
- NST+HT+FB (exact inversion)
- NST+HT+subOptFB (suboptimal, eigenvalue-based feedback)
- HTP (Hard-Thresholding Pursuit)
- GHTP (Graded Hard-Thresholding Pursuit)
- Adaptive versions: AdptNST+HT+FB, AdptNST+HT+subOptFB
Key findings include:
- The NMSE (normalized mean-square error) achieved by NST+HT+subOptFB is on par with exact feedback and HTP across a broad range of undersampling/sparsity regimes.
- The runtime of NST+HT+subOptFB grows linearly in $m$ and $N$, and at the larger tested problem sizes it is roughly one order of magnitude faster than HTP and exact-feedback NST+HT+FB; at the largest tested configurations, NST+HT+subOptFB is 3–5× faster than exact-feedback NST+HT+FB.
- The adaptive variant AdptNST+HT+subOptFB achieves comparable NMSE to its competitors but with the lowest per-iteration cost and overall runtime.
7. Significance and Applicability
The ITI framework with null-space tuning and eigenvalue-based suboptimal feedback is distinguished by its combination of rapid geometric convergence and low per-iteration complexity. Under mild P-RIP conditions, this makes ITI a compelling choice for sparse recovery in very large-scale systems, such as high-dimensional compressive sensing, where conventional inversion-based schemes become infeasible (Song et al., 2017).