RISP: Restarted Inertia & Score-Based Priors
- The paper introduces a novel framework (RISP) that combines momentum with explicit deep score-based priors to outperform standard RED methods.
- It employs a restart criterion to prevent divergence in non-convex settings, achieving a provable convergence rate of O(n^(-4/7)).
- Experimental results demonstrate RISP's efficiency in large-scale imaging tasks, yielding high-quality reconstructions with significantly reduced computation time.
Restarted Inertia with Score-based Priors (RISP) is a framework for solving ill-posed imaging and inverse problems that couples inertial acceleration techniques—specifically, momentum with restarts—with deep score-based image priors. In contrast to standard Regularization by Denoising (RED), which integrates a denoising operator as an implicit image prior, RISP introduces an explicit neural score prior and a principled restarting inertia mechanism. This combination enables provably faster convergence rates than RED while maintaining or improving reconstruction quality, and retains robustness in non-convex or large-scale settings (Renaud et al., 8 Oct 2025).
1. Foundations and Algorithmic Components
RISP builds on the RED paradigm, which regularizes the minimization of a data-fidelity objective by adding a denoising-based prior. Instead of using a fixed-point iteration or simple gradient-based method, RISP introduces an inertial term (momentum) to accelerate convergence, together with an explicit restarting mechanism to avoid divergence or excessive oscillation. The prior is encoded via a score function $s_\theta$, typically parameterized by a deep neural network trained via score matching, so that $s_\theta(x) \approx \nabla_x \log p(x)$ for an implicit prior $p$. The composite optimization problem is

$$\min_x \; F(x) = f(x) + \lambda\, g(x),$$

where $f$ encodes data consistency and $g$ is the learned (possibly non-convex) image prior, whose gradient is supplied by the score network ($\nabla g = -s_\theta$).
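As a concrete illustration, the following minimal sketch assembles the gradient of this composite objective for a linear least-squares data term; `A`, `y`, and the pretrained `score_net` are hypothetical placeholders, not objects defined in the paper:

```python
import torch

def composite_grad(x, A, y, score_net, lam=0.1):
    """Gradient of F(x) = 0.5 * ||A x - y||^2 + lam * g(x),
    where grad g(x) is taken to be the negative learned score."""
    # Data-fidelity gradient for the least-squares term: A^T (A x - y).
    grad_f = A.T @ (A @ x - y)
    # Prior gradient: the score network approximates grad log p(x),
    # so the regularizer gradient is its negative.
    with torch.no_grad():
        grad_g = -score_net(x)
    return grad_f + lam * grad_g
```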
The canonical gradient-based RISP (RISP-GM) update is

$$x_{k+1} = x_k + \beta\,(x_k - x_{k-1}) - \eta\,\nabla F(x_k),$$

with $\beta$ determining the share of momentum and $\eta$ the step size. An analogous RISP-Prox version replaces the explicit gradient step with a proximal step, with similar inertia and restart control.
The restart criterion is based on accumulated movement: if $\sum_{i=k_0}^{k} \|x_{i+1} - x_i\|^2 > B^2$, where $k_0$ is the iteration of the last restart and $B$ is a preset error budget, the momentum is reset. This prevents over-acceleration in non-convex energy landscapes and stabilizes convergence.
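A minimal sketch of the resulting loop, reusing the hypothetical `composite_grad` helper above (e.g. `grad_F = lambda x: composite_grad(x, A, y, score_net)`); the step size, momentum, and budget values are illustrative, not the paper's tuned settings:

```python
def risp_gm(x0, grad_F, n_iters=500, eta=1e-3, beta=0.9, B=1.0):
    """Gradient-based RISP: heavy-ball momentum with a movement-budget restart."""
    x_prev, x = x0.clone(), x0.clone()
    moved = 0.0  # accumulated squared movement since the last restart
    for _ in range(n_iters):
        # Inertial (heavy-ball) step: momentum term plus a gradient correction.
        x_next = x + beta * (x - x_prev) - eta * grad_F(x)
        moved += ((x_next - x) ** 2).sum().item()
        x_prev, x = x, x_next
        # Restart: once accumulated movement exceeds the budget B^2,
        # kill the momentum by collapsing the two iterates.
        if moved > B ** 2:
            x_prev = x.clone()
            moved = 0.0
    return x
```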
2. Convergence Rates and Theoretical Guarantees
RISP achieves a provably faster stationary-point convergence rate than RED, under regularity assumptions:
- The data-fidelity term $f$ has an $L$-Lipschitz gradient and a $\rho$-Lipschitz Hessian.
- The score function $s_\theta$ is sufficiently smooth.
The key bound for RISP-GM is

$$\min_{1 \le k \le n} \big\|\nabla F(x_k)\big\| = O\!\big(n^{-4/7}\big),$$

where $n$ is the number of iterations, $L$ and $\rho$ are the smoothness constants, and the hidden constant also depends on the initial objective gap $F(x_0) - \inf F$. This $O(n^{-4/7})$ convergence rate is an improvement over the $O(n^{-1/2})$ of standard RED, and does not require convexity of the image prior (Renaud et al., 8 Oct 2025).
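Equivalently, by a standard conversion (not stated in this form in the source), the rate can be read as an iteration-complexity bound for reaching an $\varepsilon$-stationary point:

```latex
% Set the bound n^{-4/7} equal to a target accuracy \varepsilon and solve for n:
\min_{1 \le k \le n} \|\nabla F(x_k)\| \le \varepsilon
\quad\text{is guaranteed once}\quad
n = O\!\left(\varepsilon^{-7/4}\right),
% compared with n = O(\varepsilon^{-2}) implied by the O(n^{-1/2}) rate of standard RED.
```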
3. Score-Based Priors and Data-Driven Regularization
The RISP framework enhances classical RED by embedding a score-based prior, represented explicitly through a neural network $s_\theta$. The network, trained by denoising score matching on large-scale image datasets, models the gradient of the log-probability of images under the true (possibly non-convex, multi-modal) data distribution. This approach generalizes beyond handcrafted regularizers (such as $\ell_1$/TV/Sérsic) and is capable of capturing complex structures, high-frequency details, and non-Gaussian uncertainties (Adam et al., 2022, Kobler et al., 2023, Feng et al., 2023). The explicit score prior enables sampling as well as Maximum a Posteriori (MAP) estimation and can flexibly adapt to plugged-in denoisers or diffusion models.
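A minimal sketch of the denoising score matching objective used to train such a prior (generic PyTorch with a hypothetical `score_net`; a single noise level is used for brevity, whereas practical priors are trained over a noise schedule):

```python
import torch

def dsm_loss(score_net, x, sigma=0.1):
    """Denoising score matching at a single noise level sigma.

    The score of the Gaussian-smoothed density at x_noisy is
    (x - x_noisy) / sigma^2, so the network is regressed onto that target."""
    noise = sigma * torch.randn_like(x)
    x_noisy = x + noise
    target = -noise / sigma ** 2          # equals (x - x_noisy) / sigma^2
    pred = score_net(x_noisy)
    return ((pred - target) ** 2).sum(dim=tuple(range(1, x.ndim))).mean()
```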
4. Relation to RED and Prior Acceleration Schemes
Traditional RED methods utilize proximal or gradient iterations with denoiser-generated implicit gradients, achieving $O(n^{-1/2})$ convergence to stationary points. They are often accelerated heuristically, without theoretical guarantees, and can suffer from overshooting or non-monotonic progress, especially when the prior is non-convex. In contrast, RISP formalizes the use of momentum and restart (a baseline RED gradient step is sketched after the list below for comparison), offering:
- An inertia-based scheme that is provably stable and fast even with non-convex priors.
- An analysis of both discrete and continuous-time dynamics, showing the connection to heavy-ball ODEs: $\ddot{x}(t) + \alpha\,\dot{x}(t) + \nabla F(x(t)) = 0$.
- A mechanism to reset momentum based on an explicit bound, preventing divergence in complex energy landscapes (Renaud et al., 8 Oct 2025, Maulén et al., 12 Jun 2025).
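For comparison, a minimal sketch of the plain, non-inertial RED gradient step that RISP accelerates; `denoiser` is a hypothetical plug-in denoiser, and under the usual RED assumptions the prior gradient is the denoising residual $x - D(x)$:

```python
def red_gm_step(x, grad_f, denoiser, eta=1e-3, lam=0.1):
    """One step of plain (non-inertial) RED gradient descent."""
    # RED prior gradient: residual between the image and its denoised version.
    grad_g = x - denoiser(x)
    return x - eta * (grad_f(x) + lam * grad_g)
```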
5. Continuous-Time Interpretation and Dynamics
RISP is linked to heavy-ball dynamics via the continuous-time ODE

$$\ddot{x}(t) + \alpha\,\dot{x}(t) + \nabla F(x(t)) = 0,$$

where $\alpha$ is the damping parameter determined by the discrete step size and inertia weight. The restarting mechanism in discrete RISP corresponds to a stopping time in the ODE, and the convergence rate in continuous time is mirrored by the discrete analysis (Maulén et al., 12 Jun 2025, Renaud et al., 8 Oct 2025). This connection provides analytical insight into the trade-off between acceleration (momentum), stability (restarts), and prior enforcement.
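A standard finite-difference reading of the heavy-ball iteration recovers this ODE (a sketch of the usual correspondence with step $h$; the exact constants depend on the paper's parameterization):

```latex
% Rearranging x_{k+1} = x_k + \beta (x_k - x_{k-1}) - \eta \nabla F(x_k) and dividing by h^2:
\frac{x_{k+1} - 2x_k + x_{k-1}}{h^2}
  + \frac{1-\beta}{h}\cdot\frac{x_k - x_{k-1}}{h}
  + \frac{\eta}{h^2}\,\nabla F(x_k) = 0,
\qquad\text{so that } \alpha = \frac{1-\beta}{h}, \quad \eta = h^2 .
```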
6. Experimental Results Across Imaging Tasks
RISP has demonstrated significant acceleration and competitive or superior reconstruction quality in diverse imaging problems, both linear and nonlinear:
- Image deblurring (motion and Gaussian blur) and inpainting of missing pixels.
- Single-image super-resolution.
- Rician noise removal (non-convex data-fidelity).
- Large-scale optical tomography (inverse scattering).
In experiments, both RISP-GM and RISP-Prox converge several times faster than their RED counterparts, achieving the same PSNR or SSIM within a fraction of the iterations. For large-scale tasks, RISP reduces total computation time by an order of magnitude, while final reconstructions exhibit fine detail and accurate structure recovery. In highly non-convex settings, RISP maintains stability due to the restart policy.
7. Broader Implications, Applicability, and Perspectives
The RISP approach has far-reaching implications:
- It enables practical deployment of deep generative priors in real-time or large-scale inverse imaging due to much faster convergence.
- The framework is robust to non-convex and learned priors, generalizing to settings such as MRI, tomography, radar, and more general scientific imaging.
- This methodology establishes a blueprint for integrating advanced priors from diffusion models or plug-and-play denoisers with stable and provably fast optimization.
- The continuous-time dynamical perspective provides a foundation for the future design of adaptive restart rules, alternative discretizations, and combined learning/optimization pipelines.
RISP bridges the gap between high statistical expressivity (score-based priors) and accelerated optimization (momentum with provable restart), setting a new standard for modern inverse problem solvers (Renaud et al., 8 Oct 2025, Feng et al., 2023, Kobler et al., 2023, Sun et al., 2023).