Sparse Signal Reconstruction (SSR)
- Sparse signal reconstruction (SSR) is a framework for recovering signals with sparse or compressible representations from indirect, noisy, or compressed measurements using methods like convex relaxation, greedy pursuit, and Bayesian inference.
- SSR techniques address diverse measurement models—including linear, nonlinear, phase-only, and quantized setups—to enable robust support recovery even in the presence of corruption or incomplete data.
- The approach plays a critical role in compressive sensing and high-dimensional data analysis, with practical applications extending to hardware-efficient implementations and adaptive systems integrating side information.
Sparse signal reconstruction (SSR) encompasses the theoretical and algorithmic framework for recovering signals or objects whose underlying representations are sparse or compressible, based on indirect, compressed, or noisy measurements. SSR is central to compressive sensing, statistical model selection, and high-dimensional data analysis. A core task is the recovery of a signal’s support—the indices of its nonzero components—which is often more robust and statistically significant than reconstruction of amplitudes. SSR methodologies span convex relaxation, greedy pursuit, Bayesian approaches, and hardware-efficient realizations; they are increasingly adapted to scenarios with corrupted, partial, quantized, or nonlinear measurements.
1. Formulations and Measurement Models
SSR settings formalize the recovery of a vector $x$ (typically $x \in \mathbb{R}^N$ or $\mathbb{C}^N$) with $\|x\|_0 = K \ll N$ from linear or nonlinear measurements $y = Ax + n$, where $A \in \mathbb{R}^{M \times N}$ ($M < N$) is a sensing or design matrix, and $n$ denotes noise. The canonical SSR problem is $\min_x \|x\|_0$ subject to $\|y - Ax\|_2 \leq \epsilon$, which is NP-hard. Convex relaxation (basis pursuit), greedy pursuit (OMP, CoSaMP), and probabilistic methods (ECME, Bayesian inference) have been developed to make SSR tractable.
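As an illustration of the convex route, the basis pursuit relaxation $\min_x \|x\|_1$ s.t. $Ax = y$ can be recast as a linear program via the split $x = u - v$ with $u, v \geq 0$. The sketch below is illustrative (problem sizes, seed, and solver choice are assumptions, not taken from the cited works):

```python
import numpy as np
from scipy.optimize import linprog

# Basis pursuit:  min ||x||_1  s.t.  A x = y,
# recast as an LP with x = u - v, u >= 0, v >= 0 (sizes are illustrative).
rng = np.random.default_rng(0)
N, M, K = 40, 20, 3                            # ambient dim, measurements, sparsity

x_true = np.zeros(N)
x_true[rng.choice(N, K, replace=False)] = rng.standard_normal(K)
A = rng.standard_normal((M, N)) / np.sqrt(M)   # Gaussian sensing matrix
y = A @ x_true                                 # noiseless measurements

# LP: minimize 1^T [u; v]  subject to  [A, -A][u; v] = y,  [u; v] >= 0
res = linprog(c=np.ones(2 * N), A_eq=np.hstack([A, -A]), b_eq=y,
              bounds=(0, None))
x_hat = res.x[:N] - res.x[N:]
print("l1 recovery error:", np.linalg.norm(x_hat - x_true))
```

With these dimensions the problem sits well inside the $\ell_1$ recovery region, so the LP solution coincides with the sparse ground truth up to solver tolerance.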
Measurement corruption or incompleteness shapes model variants: phase-only measurements (Liu et al., 2010), amplitude distortion, quantization, nonlinear observation channels (e.g., sign measurements), or adaptive side information.
2. Methods for SSR: Convex and Greedy Algorithms
SSR is approached by several major algorithmic classes:
Convex relaxation (Basis Pursuit):
The $\ell_1$ norm promotes sparsity and yields tractable convex programs. Recovery guarantees are based on the Restricted Isometry Property (RIP) and mutual coherence. In (Resetar et al., 2019), $\ell_1$ minimization is robust when the support is unknown, but computationally expensive.
Greedy Pursuit Algorithms:
- Orthogonal Matching Pursuit (OMP) and Orthogonal Least Squares (OLS) select atoms one-by-one, updating least squares at each step.
- Gradient Pursuit (GP) refines directions via local descent.
- Iterative Hard Thresholding (IHT) alternates thresholding and gradient projection for $\ell_0$-constrained least squares.
In (Resetar et al., 2019), greedy methods attain low mean squared error (MSE) when the sparsity level $K$ is known and sufficiently many measurements are available. IHT yields moderate accuracy but maximal speed.
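A minimal sketch of the greedy template, here OMP with the sparsity $K$ assumed known; the test problem (sizes, amplitudes, noiseless measurements) is an illustrative assumption:

```python
import numpy as np

def omp(A, y, K):
    """Orthogonal Matching Pursuit: greedily add the column most correlated
    with the current residual, then re-fit by least squares on the support."""
    N = A.shape[1]
    support, residual, coef = [], y.copy(), np.zeros(0)
    for _ in range(K):
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x_hat = np.zeros(N)
    x_hat[support] = coef
    return x_hat, sorted(support)

# Illustrative noiseless test problem
rng = np.random.default_rng(1)
N, M, K = 64, 32, 4
x_true = np.zeros(N)
idx = rng.choice(N, K, replace=False)
x_true[idx] = rng.choice([-1.0, 1.0], K) * (1.0 + rng.random(K))
A = rng.standard_normal((M, N)) / np.sqrt(M)
y = A @ x_true
x_hat, support_hat = omp(A, y, K)
print("OMP support:", support_hat, " error:", np.linalg.norm(x_hat - x_true))
```

Because the least-squares re-fit makes the residual orthogonal to all selected atoms, OMP never picks the same atom twice; in the noiseless regime, finding the true support gives exact recovery.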
Thresholding and Adaptive Filtering:
- ISD (Iterative Support Detection) (0909.4359) leverages iterative support estimation and truncated $\ell_1$ minimization, outperforming classical convex approaches on signals with fast-decaying coefficients.
- $\ell_0$-LMS and $\ell_0$-ZAP (Jin et al., 2013) adapt stochastic gradient techniques from adaptive filtering to SSR, employing continuous zero-attracting surrogates of the $\ell_0$ norm, with accelerated convergence and enhanced noise robustness.
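The zero-attraction idea can be sketched with the simplest such update, a ZA-LMS-style recursion: the standard LMS gradient step plus a small term $-\rho\,\mathrm{sign}(w)$ that shrinks inactive taps toward zero. Step sizes and the streaming model below are illustrative assumptions, not the exact surrogate of the cited paper:

```python
import numpy as np

# Zero-attracting LMS sketch: LMS step + a small zero attractor that pulls
# inactive filter taps toward zero (parameters are illustrative).
rng = np.random.default_rng(2)
N, K = 20, 3
w_true = np.zeros(N)
w_true[rng.choice(N, K, replace=False)] = np.array([1.0, -0.7, 0.5])

w = np.zeros(N)
mu, rho = 0.01, 1e-4                    # LMS step size, zero-attraction strength
for _ in range(5000):
    a = rng.standard_normal(N)          # one streaming regressor
    d = a @ w_true                      # noiseless desired sample
    e = d - a @ w                       # a-priori error
    w += mu * e * a - rho * np.sign(w)  # LMS step + zero attractor

print("misalignment:", np.linalg.norm(w - w_true))
```

The attractor introduces a small steady-state bias of order $\rho/\mu$ on the active taps, the price paid for faster shrinkage of the inactive ones.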
Global Optimization and Nonconvex Methods:
- Rational and polynomial relaxations (Marmin et al., 2020) address SSR under nonlinear models, using Lasserre’s moment-SOS semidefinite hierarchy for piecewise rational penalties (e.g., SCAD/MCP), achieving certified global minimizers via large-scale semidefinite programming with complexity reductions through subsampling and symmetry.
3. Support Recovery with Partial or Corrupted Measurements
SSR often requires reconstructing the signal’s support in the presence of measurement corruption or incomplete information.
- Phase-only SSR (POSSR) (Liu et al., 2010): In cases where amplitudes are corrupted but phase information survives, POSSR casts support recovery as a convex second-order cone program (SOCP) built from the measured phases. This formulation enables robust support recovery even when amplitudes are destroyed: simulations show POSSR attaining high support-recovery success rates under amplitude corruption, outperforming both naive BP and phase-only BP.
- 1-bit and Level Crossing SSR (Mashhadi et al., 2016): For analog/digital systems recording only sign or level-crossing events, SSR is cast as a signed/1-bit CS problem. BSL0 (binary smoothed-$\ell_0$) and BIHT (binary iterative hard thresholding) are adapted for these settings; multi-level crossing constraints are handled by stacking measurements and augmented matrix formulations.
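A hedged sketch of the 1-bit setting: a BIHT-style iteration that nudges the estimate toward sign consistency and hard-thresholds to $K$ terms. Sizes, step size, and iteration count are illustrative; since 1-bit measurements destroy scale, accuracy is measured up to normalization:

```python
import numpy as np

def biht(A, y_sign, K, iters=100, tau=None):
    """Binary IHT sketch for y = sign(A x): gradient-like step on the sign
    consistency residual, then hard thresholding to the K largest entries.
    Scale is lost in 1-bit measurements, so the output is normalized."""
    M, N = A.shape
    tau = 1.0 / M if tau is None else tau
    x = np.zeros(N)
    for _ in range(iters):
        a = x + tau * (A.T @ (y_sign - np.sign(A @ x)))
        keep = np.argsort(np.abs(a))[-K:]   # hard threshold to K entries
        x = np.zeros(N)
        x[keep] = a[keep]
    return x / np.linalg.norm(x)

rng = np.random.default_rng(3)
N, M, K = 50, 200, 4
x_true = np.zeros(N)
x_true[rng.choice(N, K, replace=False)] = rng.standard_normal(K)
x_true /= np.linalg.norm(x_true)
A = rng.standard_normal((M, N))
x_hat = biht(A, np.sign(A @ x_true), K)
print("cosine similarity to truth:", float(x_hat @ x_true))
```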
4. SSR in High-Coherence and Structured Sensing Matrices
Coherent or structured sensing matrices (e.g., super-resolution grids, block/clustered dictionaries) challenge classical SSR guarantees. The Partial Inversion (PartInv) algorithm (Chen et al., 2013) targets highly coherent scenarios:
- At each iteration, the support estimate is updated by solving a least-squares problem on the top $L$ indices; inversion is carried out on the Gram matrix of the selected columns.
- Exact recovery is proved under spectral norm and cross-correlation controls, provided $|\supp(x)| = K \leq L < M$ and amplitude conditions are met.
- On block-correlated matrices or wavelet trees, PartInv achieves markedly larger recovery regions than convex or standard greedy algorithms.
5. Recovery with Side Information and Heterogeneous Priors
Access to auxiliary signals (“side information”, SI) is leveraged by adaptive multi-SI frameworks:
- RAMSIA and weighted $\ell_1$-$\ell_1$ minimization (Luong et al., 2016, Luong et al., 2016): SSR is formulated as a weighted objective of the form $\min_x \frac{1}{2}\|y - Ax\|_2^2 + \lambda \sum_j \beta_j \|W_j(x - z_j)\|_1$, where the $z_j$ are side-information signals and the diagonal weight matrices $W_j$ carry adaptive intra-/inter-SI weights. Two-level weighting ensures poor side information is dynamically suppressed (by decreasing its $\beta_j$ and $W_j$ entries) while combining multiple SIs of variable reliability. RAMSIA demonstrates that, for demanding tasks (e.g., multiview image histograms), using three SIs reduces the number of measurements required for recovery by up to $100$ samples compared to classical CS or single-SI approaches.
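A simplified single-SI instance of this objective can be solved by proximal gradient (ISTA): the prox of $\lambda\|x - z\|_1$ is soft-thresholding centered at the side information $z$, so the iterate is pulled toward $z$ wherever the data allow. Everything below (one SI, uniform weights, sizes, $\lambda$) is an illustrative assumption, not the full adaptive multi-SI algorithm:

```python
import numpy as np

def ista_si(A, y, z, lam=0.01, iters=2000):
    """ISTA for 0.5*||Ax - y||^2 + lam*||x - z||_1 (single side-information
    signal z, uniform weight). Prox = soft-thresholding centered at z."""
    t = 1.0 / np.linalg.norm(A, 2) ** 2      # step = 1/L, L = ||A||_2^2
    x = z.copy()
    for _ in range(iters):
        v = x - t * (A.T @ (A @ x - y))      # gradient step on data term
        d = v - z
        x = z + np.sign(d) * np.maximum(np.abs(d) - t * lam, 0.0)
    return x

rng = np.random.default_rng(4)
N, M, K = 60, 25, 4
x_true = np.zeros(N)
idx = rng.choice(N, K, replace=False)
x_true[idx] = rng.choice([-1.0, 1.0], K)
A = rng.standard_normal((M, N)) / np.sqrt(M)
y = A @ x_true
z = x_true.copy()                            # SI correct except in 2 entries
z[idx[0]] += 1.0
z[[i for i in range(N) if i not in idx][0]] = 0.5
x_hat = ista_si(A, y, z)
print("error vs truth:", np.linalg.norm(x_hat - x_true),
      " SI error:", np.linalg.norm(z - x_true))
```

When $x - z$ is sparse (side information wrong in only a few entries), the few measurements suffice to correct the SI, so the final error falls well below the SI's own error.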
- Bayesian SSR with Structured Priors (Quadeer et al., 2012): SSR is approached from a full Bayesian perspective. Bernoulli-Gaussian, non-Gaussian, or empirical priors are combined with semi-orthogonal clustering of sensing matrices (e.g., partial DFT, Toeplitz). This leads to order-recursive updates in local clusters, reducing computational cost in low-sparsity regimes.
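The Bernoulli-Gaussian model can be made concrete at toy scale by exhaustive support enumeration: score every support by its prior times Gaussian evidence. This brute-force sketch is tractable only at tiny sizes (the cited work instead gains efficiency from clustering structured sensing matrices); all sizes and hyperparameters are illustrative:

```python
import numpy as np
from itertools import combinations

# Toy Bayesian SSR: Bernoulli-Gaussian prior, MAP support by enumeration.
rng = np.random.default_rng(6)
N, M = 8, 6
p_act, var_x, var_n = 0.25, 1.0, 1e-4    # activity prob., signal/noise variances
x_true = np.zeros(N)
x_true[[1, 5]] = [1.2, -0.9]
A = rng.standard_normal((M, N)) / np.sqrt(M)
y = A @ x_true + np.sqrt(var_n) * rng.standard_normal(M)

def log_post(S):
    """log p(S) + log p(y | S): Gaussian evidence under support S times a
    Bernoulli prior on which entries are active."""
    Sigma = var_n * np.eye(M)
    if S:
        As = A[:, list(S)]
        Sigma = Sigma + var_x * (As @ As.T)
    _, logdet = np.linalg.slogdet(Sigma)
    log_ev = -0.5 * (M * np.log(2 * np.pi) + logdet + y @ np.linalg.solve(Sigma, y))
    return log_ev + len(S) * np.log(p_act) + (N - len(S)) * np.log(1 - p_act)

supports = [S for k in range(N + 1) for S in combinations(range(N), k)]
best = max(supports, key=log_post)
print("MAP support:", best)
```

The Gaussian evidence automatically penalizes supersets of the true support (an Occam factor), so the MAP support matches the ground truth at reasonable SNR.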
6. Hardware and Algorithmic Scalability
SSR methods have informed hardware realizations and efficient algorithmic implementations:
- Threshold and QR-based architectures (Orovic et al., 2015): Single-pass SSR with hardware-efficient QR factorization (using only the upper-triangular factor $R$ and Givens rotations) avoids full storage or computation of the orthogonal factor $Q$, dramatically reducing flop counts and hardware complexity.
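The $Q$-free idea can be sketched directly: apply Givens rotations to the augmented matrix $[A \mid y]$, so only the triangularized system and the rotated right-hand side are ever stored, and back-substitution gives the least-squares solution. This is a generic illustration of the technique, not the cited architecture:

```python
import numpy as np

def givens_ls(A, y):
    """Least squares via Givens rotations on [A | y]: Q is never formed;
    only the evolving (eventually upper-triangular) array is stored."""
    M, N = A.shape
    R = np.hstack([A.astype(float), y.reshape(-1, 1).astype(float)])
    for j in range(N):                       # zero out below-diagonal, column by column
        for i in range(M - 1, j, -1):
            a, b = R[i - 1, j], R[i, j]
            r = np.hypot(a, b)
            if r == 0.0:
                continue
            c, s = a / r, b / r
            rot = np.array([[c, s], [-s, c]])
            R[i - 1:i + 1, :] = rot @ R[i - 1:i + 1, :]   # annihilate R[i, j]
    # Back-substitution on the top N x N triangular block
    return np.linalg.solve(np.triu(R[:N, :N]), R[:N, -1])

rng = np.random.default_rng(8)
A = rng.standard_normal((8, 4))
y = rng.standard_normal(8)
print("Q-free LS solution:", np.round(givens_ls(A, y), 4))
```

Each rotation touches only two rows, which is what makes the scheme attractive for systolic/streaming hardware.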
- QUBO-annealing approaches (Ide et al., 2022): Direct $\ell_0$-regularized regression is mapped to quadratic unconstrained binary optimization; signal entries are quantized, and ancillary bits are factored into QUBO constraints solvable by quantum annealing hardware (D-Wave). Empirical tests indicate that QUBO-based SSR is competitive or slightly superior to OMP/LASSO under small-scale, noisy, or coarse quantization settings.
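The QUBO mapping can be sketched with the simplest quantization, $x_i \in \{0, 1\}$: expanding $\|y - Aq\|^2 + \lambda \sum_i q_i$ and using $q_i^2 = q_i$ folds all linear terms into the diagonal of a single matrix $Q$. The cited work uses multi-bit encodings and annealing hardware; here brute force over a toy size stands in for the annealer:

```python
import numpy as np
from itertools import product

# l0-regularized regression with binary entries mapped to a QUBO:
#   ||y - Aq||^2 + lam*sum(q)  =  const + q^T Q q   (since q_i^2 = q_i)
rng = np.random.default_rng(5)
N, M, lam = 10, 8, 0.1
q_true = np.zeros(N)
q_true[[2, 7]] = 1.0
A = rng.standard_normal((M, N))
y = A @ q_true

Q = A.T @ A                                   # quadratic couplings
Q[np.diag_indices(N)] += lam - 2.0 * (A.T @ y)  # fold linear terms into diag

best_q, best_val = None, np.inf               # brute force = toy "annealer"
for bits in product([0.0, 1.0], repeat=N):
    q = np.array(bits)
    val = q @ Q @ q
    if val < best_val:
        best_q, best_val = q, val
print("recovered binary signal:", best_q)
```

The $\lambda$ on the diagonal directly prices each active bit, which is how the $\ell_0$ penalty survives the binary reformulation.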
- Low-rank Hankel SSR for spectral signals (Cai et al., 2016): For signals modeled as sparse combinations of $r$ sinusoids, FIHT (Fast Iterative Hard Thresholding) leverages tangent-space projections to keep per-iteration costs near-linear in the signal length $n$; recovery is guaranteed from $O(r^2 \log^2 n)$ measurements under incoherence conditions. Extensions to multidimensional SSR and high-dimensional arrays are feasible.
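The structural fact FIHT exploits is that a sum of $r$ complex sinusoids lifts to a Hankel matrix of rank exactly $r$. The toy check below verifies that property and applies one rank-$r$ hard-thresholding step on the singular values; it is not FIHT itself (no fast tangent-space projections), and the signal parameters are illustrative:

```python
import numpy as np
from scipy.linalg import hankel

# A sum of r complex sinusoids gives a rank-r Hankel matrix.
n, r = 64, 3
t = np.arange(n)
freqs = np.array([0.11, 0.23, 0.40])
x = sum(np.exp(2j * np.pi * f * t) for f in freqs)   # spectrally r-sparse signal

H = hankel(x[: n // 2 + 1], x[n // 2:])              # Hankel lift: H[i, j] = x[i + j]
U, s, Vh = np.linalg.svd(H, full_matrices=False)
print("top singular values:", np.round(s[: r + 1], 3))

# Rank-r hard threshold: keep only the r largest singular values
H_r = (U[:, :r] * s[:r]) @ Vh[:r]
print("projection error:", np.linalg.norm(H - H_r))
```

Because the true rank is $r$, the $(r{+}1)$-th singular value is numerically zero and the rank-$r$ projection is lossless; with noisy or subsampled data this projection becomes the per-iteration workhorse.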
7. Guarantees, Measurement Bounds, and Practical Guidance
Performance in SSR is governed by conditions on the sensing matrix $A$, the sparsity level $K$, and the noise:
- Restricted Isometry Property (RIP) and coherence bounds quantify the accuracy and stability of recovery via $\ell_1$ or nonconvex methods.
- Measurement bounds with side information (Luong et al., 2016): for Gaussian $A$, weighted $\ell_1$-$\ell_1$ minimization succeeds with a number of measurements governed by an effective sparsity that accounts for multi-SI weighting, improving substantially over classical CS bounds. The adaptive RAMSI implementation ensures these theoretical gains translate in practice.
- Support recovery under phase-only or quantized measurements (Liu et al., 2010, Ide et al., 2022): POSSR and QUBO-based SSR regimes offer robust support identification in realistic channels subject to amplitude corruption or resolution constraints.
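Among these conditions, mutual coherence is the easiest to check numerically: $\mu(A)$ is the largest normalized inner product between distinct columns, and the classical worst-case guarantee ensures exact OMP/BP recovery when $K < (1 + 1/\mu)/2$. A small check (matrix sizes are illustrative):

```python
import numpy as np

def mutual_coherence(A):
    """mu(A): max absolute normalized inner product between distinct columns."""
    G = A / np.linalg.norm(A, axis=0)    # unit-norm columns
    G = np.abs(G.T @ G)
    np.fill_diagonal(G, 0.0)             # ignore self-correlations
    return G.max()

rng = np.random.default_rng(7)
mu_gauss = mutual_coherence(rng.standard_normal((64, 128)))
mu_ident = mutual_coherence(np.eye(64))
print(f"Gaussian 64x128: mu = {mu_gauss:.3f}, identity: mu = {mu_ident:.3f}")
```

Random Gaussian matrices have moderate coherence that shrinks with more rows, while an orthonormal basis has $\mu = 0$; highly coherent dictionaries (Section 4) sit at the other extreme, where these worst-case bounds become vacuous.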
8. Significance, Limitations, and Open Directions
SSR is essential for modern high-dimensional inference and signal processing across wireless communications, tomography, imaging, and machine learning. Current limitations include scalability to ultra-high dimensions, uniform guarantees in highly correlated or structured dictionaries, and handling complex forms of measurement corruption (e.g., adversarial noise, quantization, nonlinear mappings). Ongoing research focuses on:
- Theoretical guarantees for nonconvex and phase-only regimes (e.g., phase-only RIP analysis (Liu et al., 2010))
- Fast algorithms for structured matrices and cluster-based priors
- Integration with machine learning for learned side-information or weights
- Hardware-oriented SSR under resource and power constraints
SSR methodologies continue to expand with advances in convex analysis, global optimization, quantum computing, and Bayesian inference, offering increasingly reliable and computationally feasible solutions to compressed and high-dimensional inference problems.