Stochastic Reconstruction Technique
- Stochastic reconstruction technique is a computational methodology that employs random multiresolution sketching to solve complex inverse imaging problems.
- It integrates stochastic optimization, variance reduction, and saddle-point reformulation to reduce computational cost and ensure convergence in high-dimensional settings.
- Empirical results in computed tomography validate the method's ability to lower per-iteration cost while retaining linear convergence, yielding wall-clock savings over conventional solvers.
Stochastic reconstruction techniques constitute a diverse class of computational methodologies that leverage randomness to solve complex inverse problems, particularly where direct, high-resolution or complete-data solutions are computationally prohibitive or fundamentally underdetermined. Recent advances have integrated stochastic optimization, random sketching, and multiresolution analysis to accelerate regularized iterative solvers in large-scale imaging scenarios, exemplified by computed tomography (CT). The ImaSk algorithm ("Image Sketching"), as presented in (Perelli et al., 13 Dec 2024), strategically combines these principles using random multiresolution image-domain operators to reduce per-iteration cost and maintain convergence guarantees for high-dimensional, regularized reconstruction tasks.
1. Image-Domain Sketching and Multiresolution Operators
ImaSk is predicated on the concept of randomly projecting the original optimization variable (the high-dimensional image) into lower-dimensional subspaces via a set of multiresolution "sketch operators" $S_1, \dots, S_K$. These operators are designed such that their expectation equals the identity: $\mathbb{E}[S^\top S] = \sum_{k=1}^{K} p_k S_k^\top S_k = I$, where $p_k$ is the probability of selecting sketch operator $S_k$ at a given iteration. Each $S_k$ typically arises via block-averaging or other downsampling schemes that map the image to a lower resolution, enabling substantial reductions in the computational complexity of applying the forward operator $A$ (e.g., the Radon transform in CT).
The stochastic process is implemented by selecting, at each solver iteration, a sketch operator $S_k$ (with probability $p_k$) and computing the sketched forward map $A S_k^\top S_k x$ using $S_k$ and its adjoint $S_k^\top$. This provides an unbiased but noisy estimate of the full forward model, ensuring that iterates are, in expectation, consistent with the original high-resolution problem.
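As a toy 1-D illustration of this unbiasedness condition, the sketch below builds a multiresolution pair from Haar averaging and differencing rows and checks that the weighted sum of $S_k^\top S_k$ equals the identity, so the sketched forward model is unbiased. The operators, probabilities, and random forward matrix here are illustrative assumptions, not the paper's exact construction; any family satisfying $\sum_k p_k S_k^\top S_k = I$ behaves the same way.

```python
import numpy as np

def haar_pair(n):
    """Coarse (averaging) and detail (differencing) Haar operators,
    each mapping R^n -> R^(n/2); together their rows form an
    orthonormal basis, so Hc.T @ Hc + Hd.T @ Hd = I."""
    assert n % 2 == 0
    Hc = np.zeros((n // 2, n))
    Hd = np.zeros((n // 2, n))
    for i in range(n // 2):
        Hc[i, 2 * i] = Hc[i, 2 * i + 1] = 1 / np.sqrt(2)
        Hd[i, 2 * i], Hd[i, 2 * i + 1] = 1 / np.sqrt(2), -1 / np.sqrt(2)
    return Hc, Hd

n = 8
probs = np.array([0.7, 0.3])           # selection probabilities p_k (assumed)
Hc, Hd = haar_pair(n)
# Rescale so that sum_k p_k S_k^T S_k = I (the unbiasedness condition).
sketches = [Hc / np.sqrt(probs[0]), Hd / np.sqrt(probs[1])]

E = sum(p * S.T @ S for p, S in zip(probs, sketches))
assert np.allclose(E, np.eye(n))

# Unbiased sketched forward model: E[A S^T S x] = A x.
rng = np.random.default_rng(0)
A = rng.standard_normal((5, n))        # stand-in for the forward operator
x = rng.standard_normal(n)
mean_est = sum(p * A @ S.T @ (S @ x) for p, S in zip(probs, sketches))
assert np.allclose(mean_est, A @ x)
```

Applying $S$ first means the (expensive) forward operator only ever acts on the coarse representation, which is where the per-iteration savings come from.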
2. Saddle-Point Reformulation for Stochastic Variance-Reduced Primal-Dual Updates
To accommodate the stochastic, nonseparable structure introduced by random sketching, the original regularized least-squares problem,

$$\min_{x} \; \tfrac{1}{2}\|Ax - y\|_2^2 + g(x),$$

is reformulated as a convex-concave saddle-point problem:

$$\min_{x} \max_{w} \; \langle Ax, w\rangle - f^*(w) + g(x),$$

where $f(z) = \tfrac{1}{2}\|z - y\|_2^2$ and $f^*$ is its convex conjugate. Defining the sketched operators $A_k = A S_k^\top S_k$, the problem structure naturally admits the application of stochastic variance-reduced primal-dual updates, specifically SAGA-type memorization of gradient components, operating on randomly selected multiresolution sketches.
The algorithm maintains a table of "memory variables" for each sketch, updating only the selected entry at each iteration while averaging the contribution from all sketches to preserve unbiasedness. This architecture allows efficient stochastic gradient estimation with desirable variance-reduction properties.
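The variance-reduction mechanism can be sketched in isolation. The snippet below maintains a SAGA-style memory table with one entry per sketch and verifies that the estimator (fresh component minus its stale stored value, importance-weighted, plus the table sum) is unbiased for the full gradient. The quadratic per-sketch components and probabilities are toy assumptions standing in for the paper's sketched gradient terms.

```python
import numpy as np

rng = np.random.default_rng(1)
K, d = 4, 3
probs = np.full(K, 1.0 / K)            # sketch selection probabilities (assumed)
# Toy per-sketch gradient components: grad_k(x) = Q_k @ x - b_k,
# with the full gradient being their sum over k.
Qs = [rng.standard_normal((d, d)) for _ in range(K)]
bs = [rng.standard_normal(d) for _ in range(K)]
grad_k = lambda k, x: Qs[k] @ x - bs[k]

x = rng.standard_normal(d)
# Memory table, initially holding stale gradients evaluated at old iterates.
memory = [grad_k(k, rng.standard_normal(d)) for k in range(K)]

def saga_estimate(k, x):
    """SAGA-style estimator: refresh entry k, correct by its old stored
    value (importance-weighted by 1/p_k), and add the table sum.
    In the real loop, memory[k] would then be overwritten with `fresh`."""
    fresh = grad_k(k, x)
    return (fresh - memory[k]) / probs[k] + sum(memory)

# Unbiasedness: the expectation over the sketch index k equals the
# full gradient, regardless of how stale the memory table is.
expectation = sum(probs[k] * saga_estimate(k, x) for k in range(K))
full_grad = sum(grad_k(k, x) for k in range(K))
assert np.allclose(expectation, full_grad)
```

Only one memory entry is refreshed per iteration, so the per-iteration cost stays at one sketched evaluation while the variance of the estimator shrinks as the table catches up with the current iterate.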
3. Image Sketching Updates and Theoretical Guarantees
The primal and dual variables are updated as follows:
- Let $S_{k_t}$ be the randomly chosen sketch at iteration $t$ (drawn according to the probabilities $p_k$).
- Update the adjoint memory: set $u_k \leftarrow S_k^\top S_k x_t$ if $k = k_t$; else, retain the previous value.
- Assemble the stochastic gradient for the primal update from the refreshed entry, its stored previous value (importance-weighted by $1/p_{k_t}$), and the sum over the memory table.
- Update the primal variable $x_{t+1}$ using the proximal operator of $g$ and a preselected stepsize.
- Similarly, update the dual variable $w_{t+1}$ via the proximal operator of $f^*$.
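To make the role of the two proximal operators concrete, here is a minimal deterministic primal-dual skeleton in the Chambolle-Pock style, with the sketching and SAGA memory omitted for brevity. The matrix `A`, data `y`, ridge regularizer, and stepsizes are toy assumptions; the point is where $\mathrm{prox}_g$ and $\mathrm{prox}_{f^*}$ enter the iteration.

```python
import numpy as np

rng = np.random.default_rng(2)
m, n, mu = 6, 5, 0.5
A = rng.standard_normal((m, n))        # stand-in forward operator
y = rng.standard_normal(m)             # stand-in measurements

# prox of tau*g for g(x) = (mu/2)||x||^2 (a strongly convex regularizer)
prox_g = lambda v, tau: v / (1 + tau * mu)
# prox of sigma*f* for f(z) = (1/2)||z - y||^2, so f*(w) = (1/2)||w||^2 + <w, y>
prox_fs = lambda v, sigma: (v - sigma * y) / (1 + sigma)

L = np.linalg.norm(A, 2)               # spectral norm of A
tau = sigma = 0.9 / L                  # stepsizes with tau*sigma*||A||^2 < 1
x = np.zeros(n); xbar = x.copy(); w = np.zeros(m)

for _ in range(3000):
    w = prox_fs(w + sigma * (A @ xbar), sigma)   # dual ascent step
    x_new = prox_g(x - tau * (A.T @ w), tau)     # primal descent step
    xbar = 2 * x_new - x                         # extrapolation
    x = x_new

# Converges to the ridge solution (A^T A + mu I)^{-1} A^T y.
x_star = np.linalg.solve(A.T @ A + mu * np.eye(n), A.T @ y)
assert np.allclose(x, x_star, atol=1e-5)
```

ImaSk replaces the exact $A^\top w$ term with the memory-augmented stochastic estimate described above, trading exact gradients for cheaper sketched ones while the prox structure is unchanged.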
The paper affirms that in the case of a linear forward model and a strongly convex regularizer $g$, the algorithm converges linearly toward the global optimum. If $\mu$ is the strong convexity constant and $\tau$ the stepsize, then for suitably chosen parameters,

$$\mathbb{E}\left[\|x_t - x^\star\|^2\right] \le (1 - \rho)^t \, \|x_0 - x^\star\|^2,$$

with the contraction factor $\rho \in (0, 1)$ determined by $\mu$, $\tau$, the precise choice of sketch operators and probabilities, and the correlations among the multiresolution operators.
4. Numerical Results in Computed Tomography
The effectiveness of ImaSk is validated numerically on CT image reconstruction tasks. Using a set of downsampling operators to define the sketches $S_k$, the forward operator is efficiently applied at multiple resolutions (coarser grids versus the full-resolution grid). Key empirical observations include:
- Computational cost per iteration decreases commensurately with reduced image resolution.
- Increasing the number of available resolutions $K$ accelerates convergence in total wall time, and the computational advantage scales according to the complexity of applying each sketched forward operator.
- Relative error and PSNR curves plotted against "full matrix multiplication" equivalents confirm substantial time savings as $K$ increases.
- The aggregation property $\mathbb{E}[S^\top S] = I$ ensures that the reconstruction remains unbiased despite per-iteration information loss due to sketching.
5. Comparison to Other Stochastic Inverse Solvers
Unlike data-domain stochastic or subset methods (e.g., batch SGD, SAGA on measurement indices), ImaSk applies randomness in the image domain. Key comparisons include:
- Lower per-iteration computational cost, as downsampling directly reduces the complexity of the linear projection operations (applications of $A$ and its adjoint $A^\top$).
- Built-in variance reduction and linear convergence rates analogous to SAGA, thanks to the memory-augmented stochastic updates.
- Flexibility in trading off accuracy and computation by tuning the set of sketch operators and their selection probabilities.
However, the theoretical guarantees rest on strong convexity assumptions and linear forward models; empirical evidence with non-strongly convex penalties (such as total variation) nonetheless shows favorable performance.
6. Generalizability and Applications
The ImaSk paradigm is not limited to CT but generalizes to any inverse problem with a linear or mildly nonlinear forward model, where forward-map evaluations are expensive:
- PET, MRI, and other large-scale tomographic modalities.
- Inverse problems in remote sensing, industrial NDT, or neuroimaging with large spatial domains.
- Problems requiring regularized optimization with computational constraints, particularly those where a hierarchy of resolutions can be naturally defined.
The stochastic multiresolution update strategy is particularly amenable to parallel and distributed implementations and may be further extended to hybrid approaches (e.g., combining measurement- and image-domain sketching).
7. Future Directions and Potential Extensions
Several extensions of the ImaSk approach are indicated:
- Adapting the saddle-point and variance-reduction structure to nonlinear or nonconvex regularization, including deep learning–based image priors.
- Hybrid schemes that also incorporate stochastic sampling in the data domain.
- Adaptive multiresolution schemes, where the resolution hierarchy and probabilities are adjusted online in response to task-specific criteria.
- Application to problems where model evaluations (e.g., forward PDE solutions) dominate cost, using reduced-order or physics-informed surrogates as sketch operators.
The ImaSk stochastic reconstruction technique, by integrating randomized multiresolution analysis with rigorous optimization theory, contributes a scalable, theoretically principled, and empirically validated solution for large-scale inverse imaging and related high-dimensional reconstruction problems (Perelli et al., 13 Dec 2024).