
Stochastic Reconstruction Technique

Updated 24 August 2025
  • Stochastic reconstruction technique is a computational methodology that employs random multiresolution sketching to solve complex inverse imaging problems.
  • It integrates stochastic optimization, variance reduction, and saddle-point reformulation to reduce computational cost and ensure convergence in high-dimensional settings.
  • Empirical results in computed tomography show that the method lowers per-iteration cost relative to conventional solvers while retaining linear convergence.

Stochastic reconstruction techniques constitute a diverse class of computational methodologies that leverage randomness to solve complex inverse problems, particularly where direct, high-resolution, or complete-data reconstruction is computationally prohibitive or the underlying problem is fundamentally underdetermined. Recent advances have integrated stochastic optimization, random sketching, and multiresolution analysis to accelerate regularized iterative solvers in large-scale imaging scenarios, exemplified by computed tomography (CT). The ImaSk algorithm ("Image Sketching"), presented in (Perelli et al., 13 Dec 2024), strategically combines these principles, using random multiresolution image-domain operators to reduce per-iteration cost while maintaining convergence guarantees for high-dimensional, regularized reconstruction tasks.

1. Image-Domain Sketching and Multiresolution Operators

ImaSk is predicated on the concept of randomly projecting the original optimization variable (the high-dimensional image) into lower-dimensional subspaces via a set of multiresolution "sketch operators" $\{S_1, S_2, \dots, S_r\}$. These operators are designed such that their expectation equals the identity: $\sum_{i=1}^r p_i S_i = I$, where $p_i$ is the probability of selecting each sketch operator at a given iteration. Each $S_i$ typically arises via block-averaging or other downsampling schemes that map the image to a lower resolution, enabling substantial reductions in the computational complexity of applying the forward operator $K$ (e.g., the Radon transform in CT).
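
To make the expectation identity concrete, the sketch below builds block-averaging projectors and a telescoped family of sketch operators satisfying $\sum_i p_i S_i = I$. The telescoped construction and the specific probabilities are illustrative assumptions, not necessarily the operators used in the paper.

```python
import numpy as np

def coarse_projector(n, block):
    # Orthogonal projector onto signals that are constant on blocks of the
    # given size (block-average, then replicate): P = T^T T with T the
    # row-orthonormal block-averaging map. block=1 gives the identity.
    T = np.repeat(np.eye(n // block), block, axis=1) / np.sqrt(block)
    return T.T @ T

n, r = 16, 3
p = np.array([0.5, 0.3, 0.2])                # selection probabilities, sum to 1
P = [coarse_projector(n, blk) for blk in (4, 2, 1)]

# Telescoped sketches S_i = (P_i - P_{i-1}) / p_i (with P_0 = 0); by
# construction sum_i p_i S_i = P_r = I, i.e., the family is unbiased.
S = [(Pi - Pj) / pi for Pi, Pj, pi in zip(P, [np.zeros((n, n))] + P[:-1], p)]

E_S = sum(pi * Si for pi, Si in zip(p, S))
assert np.allclose(E_S, np.eye(n)), "expectation identity violated"
print("sum_i p_i S_i = I holds")
```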

The stochastic process is implemented by selecting, at each solver iteration, a sketch operator $S_{i_k}$ (with probability $p_{i_k}$) and computing with $K_{i_k} = K S_{i_k}$ and its adjoint. This provides an unbiased but noisy estimate of the full forward model, ensuring that iterates are, in expectation, consistent with the original high-resolution problem.
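
Continuing the toy construction above (same assumptions), a quick Monte Carlo check illustrates this unbiasedness: averaging $K S_{i_k} x$ over random draws of $i_k$ recovers the full-resolution projection $K x$.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, r = 16, 24, 3
K = rng.standard_normal((m, n))              # stand-in for the forward operator
x = rng.standard_normal(n)

# Same telescoped sketches as in the previous snippet.
def coarse_projector(n, block):
    T = np.repeat(np.eye(n // block), block, axis=1) / np.sqrt(block)
    return T.T @ T

p = np.array([0.5, 0.3, 0.2])
P = [coarse_projector(n, blk) for blk in (4, 2, 1)]
S = [(Pi - Pj) / pi for Pi, Pj, pi in zip(P, [np.zeros((n, n))] + P[:-1], p)]

# E[K S_{i_k} x] = K (sum_i p_i S_i) x = K x: the average of many sketched
# applications approaches the full-resolution forward projection.
draws = np.array([K @ S[i] @ x for i in rng.choice(r, size=50000, p=p)])
print(np.linalg.norm(draws.mean(axis=0) - K @ x))   # shrinks as draws grow
```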

2. Saddle-Point Reformulation for Stochastic Variance-Reduced Primal-Dual Updates

To accommodate the stochastic, nonseparable structure introduced by random sketching, the original regularized least-squares problem,

\min_x \; \tfrac{1}{2}\|K x - b\|^2 + R(x),

is reformulated as a convex-concave saddle-point problem:

\min_{x} \max_{y} \Biggl( \sum_{i=1}^r \langle y, K_i x \rangle - f^*(y) + R(x) \Biggr),

where $f(y) = \tfrac{1}{2}\|y - b\|^2$ and $f^*$ is its convex conjugate. Defining $A = (K_1, \dots, K_r)$, the problem structure naturally admits stochastic variance-reduced primal-dual updates, specifically SAGA-type memorization of gradient components, operating on randomly selected multiresolution sketches.
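
For this quadratic data term the convex conjugate, and hence the dual proximal step, is available in closed form; the short derivation below is standard convex analysis rather than anything specific to the paper:

f^*(v) = \sup_y \bigl( \langle v, y \rangle - \tfrac{1}{2}\|y - b\|^2 \bigr) = \langle v, b \rangle + \tfrac{1}{2}\|v\|^2 \quad (\text{maximizer } y = b + v),

and consequently

\operatorname{prox}_{\sigma f^*}(z) = \arg\min_y \Bigl( \sigma f^*(y) + \tfrac{1}{2}\|y - z\|^2 \Bigr) = \frac{z - \sigma b}{1 + \sigma}.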

The algorithm maintains a table of "memory variables" for each sketch, updating only the selected entry at each iteration while averaging the contribution from all sketches to preserve unbiasedness. This architecture allows efficient stochastic gradient estimation with desirable variance-reduction properties.

3. Image Sketching Updates and Theoretical Guarantees

The primal and dual variables are updated as follows (a runnable sketch follows the list):

  • Let $i_k$ be the randomly chosen sketch index at iteration $k$ (drawn according to the probabilities $p_i$).
  • Update the adjoint memory: $\phi_i^{(k+1)} = K_i^{T} y^{(k)}$ if $i = i_k$; otherwise, retain the previous value.
  • Assemble the stochastic gradient for the primal update:

\xi^{(k)} = \phi_{i_k}^{(k+1)} - \phi_{i_k}^{(k)} + \sum_{i=1}^r p_i \phi_i^{(k)}

  • Update $x$ using the proximal operator of $R$ and a preselected stepsize.
  • Similarly, update the dual variable $y$.
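
The steps above can be assembled into a runnable toy. The following simplified, dense-matrix sketch uses a quadratic regularizer $R(x) = \tfrac{\lambda}{2}\|x\|^2$ (so both proximal maps are closed-form), untuned stepsizes, a full-resolution dual step, and the illustrative telescoped sketches from Section 1; the paper's actual ImaSk iteration differs in these details.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, r = 16, 24, 3
K = rng.standard_normal((m, n)) / np.sqrt(m)   # toy forward operator
b = rng.standard_normal(m)
lam = 0.5                                      # R(x) = lam/2 * ||x||^2

def coarse_projector(n, block):
    T = np.repeat(np.eye(n // block), block, axis=1) / np.sqrt(block)
    return T.T @ T

p = np.array([0.5, 0.3, 0.2])
P = [coarse_projector(n, blk) for blk in (4, 2, 1)]
S = [(Pi - Pj) / pi for Pi, Pj, pi in zip(P, [np.zeros((n, n))] + P[:-1], p)]
Ki = [K @ Si for Si in S]                      # sketched operators K_i = K S_i

x_star = np.linalg.solve(K.T @ K + lam * np.eye(n), K.T @ b)  # exact minimizer

tau, sigma = 0.05, 0.2                         # untuned stepsizes (assumption)
x, y = np.zeros(n), np.zeros(m)
phi = [Kij.T @ y for Kij in Ki]                # SAGA memory table, phi_i = K_i^T y

for k in range(20000):
    ik = rng.choice(r, p=p)                    # draw sketch index i_k ~ p
    phi_new = Ki[ik].T @ y                     # refresh only the selected entry
    xi = phi_new - phi[ik] + sum(pi * ph for pi, ph in zip(p, phi))
    phi[ik] = phi_new
    x = (x - tau * xi) / (1.0 + tau * lam)     # prox step for the quadratic R
    # Toy dual prox step using the full operator K; ImaSk sketches this too.
    y = (y + sigma * (K @ x - b)) / (1.0 + sigma)

print("distance to x*:", np.linalg.norm(x - x_star))  # decreases over iterations
```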

The paper establishes that, for a linear forward model $K$ and a strongly convex regularizer $R$, the algorithm converges linearly toward the global optimum. If $\mu$ is the strong convexity constant and $\sigma$ the stepsize, then for suitably chosen parameters,

\mathbb{E}\bigl[ \| x^{(k)} - x^* \|^2 \bigr] \leq C \cdot \theta^{k},

with $\theta < 1$ determined by the precise choice of sketch operators and probabilities, and by the correlations among the multiresolution operators.
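
One practical consequence of the geometric rate: to drive the expected squared error below a tolerance $\varepsilon$, it suffices to run

k \geq \frac{\log(C/\varepsilon)}{\log(1/\theta)}

iterations, so the iteration count grows only logarithmically in the target accuracy.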

4. Numerical Results in Computed Tomography

The effectiveness of ImaSk is validated numerically on CT image reconstruction tasks. Using a set of downsampling operators $T_i$ to define $S_i = T_i^\top T_i$, the forward operator $K$ is efficiently applied at multiple resolutions (e.g., grid sizes $256 \times 256$ vs. $512 \times 512$). Key empirical observations include (a back-of-envelope cost model follows the list):

  • Computational cost per iteration decreases commensurately with the reduced image resolution.
  • Increasing the number of available resolutions $r$ accelerates convergence in total wall time, and the computational advantage scales with the complexity of each $K_i$.
  • Relative error and PSNR curves plotted against "full matrix multiplication" equivalents confirm substantial time savings as $r$ increases.
  • The aggregation property $\sum_i p_i S_i = I$ keeps the reconstruction unbiased despite the per-iteration information loss due to sketching.
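
The cost scaling can be made explicit with a simple model. Assuming (as a simplification) that the cost of applying $K$ grows with the number of pixels at the active resolution, the expected per-iteration cost is $\sum_i p_i c_i$ with $c_i$ the relative cost at resolution $i$; the grid sizes and probabilities below are hypothetical.

```python
# Relative cost of one forward application at each resolution, assuming
# cost proportional to pixel count (a simplification of real CT projectors).
grids = (512, 256, 128)
costs = [(g / 512) ** 2 for g in grids]          # 1.0, 0.25, 0.0625

p = [0.2, 0.3, 0.5]                              # hypothetical probabilities
expected = sum(pi * ci for pi, ci in zip(p, costs))
for g, c in zip(grids, costs):
    print(f"{g}x{g}: relative cost ~ {c:.4f}")
print(f"expected relative cost per iteration ~ {expected:.4f}")  # ~0.306
```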

5. Comparison to Other Stochastic Inverse Solvers

Unlike data-domain stochastic or subset methods (e.g., minibatch SGD or SAGA over measurement indices), ImaSk applies randomness in the image domain; the snippet after this list contrasts the two approaches. Key comparisons include:

  • Lower per-iteration computational cost, as downsampling directly reduces the complexity of the linear projection operations $(K_i x)$.
  • Built-in variance reduction and linear convergence rates analogous to SAGA, thanks to the memory-augmented stochastic updates.
  • Flexibility in trading off accuracy and computation by tuning the set of sketch operators and their selection probabilities.
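
The following minimal sketch contrasts the two randomization styles on a dense toy operator: data-domain methods subsample rows of $K$ (fewer measurements, full-resolution image), while image-domain sketching applies all of $K$ through a reduced-resolution representation of $x$; names and sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
m, n = 24, 16
K = rng.standard_normal((m, n))
x = rng.standard_normal(n)

# Data-domain stochasticity (e.g., SGD/SAGA over measurement indices):
# use a random subset of rows of K with the full-resolution image.
rows = rng.choice(m, size=m // 4, replace=False)
partial_measurements = K[rows] @ x               # shape (m/4,)

# Image-domain sketching (ImaSk-style): keep all measurements but apply K
# through a coarse image representation (block-average down, replicate up).
down = np.repeat(np.eye(n // 4), 4, axis=1) / 4.0    # (n/4, n) block average
up = np.repeat(np.eye(n // 4), 4, axis=0)            # (n, n/4) replication
K_coarse = K @ up                                     # K restricted to coarse basis
all_measurements = K_coarse @ (down @ x)              # shape (m,), cheaper image side

print(partial_measurements.shape, all_measurements.shape)  # (6,) vs (24,)
```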

However, the theoretical guarantees rest on strong convexity assumptions and linear forward models; empirical evidence with non-strongly convex penalties (such as total variation) nevertheless shows favorable performance.

6. Generalizability and Applications

The ImaSk paradigm is not limited to CT but generalizes to any inverse problem with a linear or mildly nonlinear forward model, where forward-map evaluations are expensive:

  • PET, MRI, and other large-scale tomographic modalities.
  • Inverse problems in remote sensing, industrial non-destructive testing (NDT), or neuroimaging with large spatial domains.
  • Problems requiring regularized optimization with computational constraints, particularly those where a hierarchy of resolutions can be naturally defined.

The stochastic multiresolution update strategy is particularly amenable to parallel and distributed implementations and may be further extended to hybrid approaches (e.g., combining measurement- and image-domain sketching).

7. Future Directions and Potential Extensions

Several extensions of the ImaSk approach are indicated:

  • Adapting the saddle-point and variance-reduction structure to nonlinear or nonconvex regularization, including deep learning–based image priors.
  • Hybrid schemes that also incorporate stochastic sampling in the data domain.
  • Adaptive multiresolution schemes, where the resolution hierarchy and probabilities are adjusted online in response to task-specific criteria.
  • Application to problems where model evaluations (e.g., forward PDE solutions) dominate cost, using reduced-order or physics-informed surrogates as sketch operators.

The ImaSk stochastic reconstruction technique, by integrating randomized multiresolution analysis with rigorous optimization theory, contributes a scalable, theoretically principled, and empirically validated solution for large-scale inverse imaging and related high-dimensional reconstruction problems (Perelli et al., 13 Dec 2024).

References

  • Perelli et al., 13 Dec 2024.