
Imaging Inverse Problems

Updated 7 January 2026
  • Imaging inverse problems are defined by the ill-posed recovery of target properties from incomplete or corrupted measurements, necessitating robust mathematical and computational techniques.
  • Classical and variational regularization methods balance data fidelity with prior assumptions using techniques like total variation and proximal algorithms for stable reconstructions.
  • Recent data-driven approaches, including plug-and-play frameworks and deep unrolled networks, enhance recovery quality by integrating learned priors and optimal experimental design.

Imaging inverse problems refer to the recovery of physical or semantic properties of a target object from indirect, often corrupted or incomplete, measurement data. These problems are central to numerous scientific and technological domains, including biomedical imaging, microscopy, astronomy, remote sensing, and geophysical exploration. The fundamental challenge is the loss of information and dramatic instability inherent in the measurement process, which typically makes naive inversion ill-posed: solutions may not exist, may not be unique, or may be arbitrarily sensitive to noise. Modern approaches combine mathematical modeling, regularization theory, and data-driven priors—ranging from sparsity and total variation to deep learning and probabilistic models—to produce stable, high-fidelity reconstructions with quantifiable guarantees.

1. Mathematical Formulation and Ill-Posedness

An imaging inverse problem is generally cast as the recovery of an unknown image or parameter vector $x \in X$ from observed data $y \in Y$ related via a (typically compact) forward operator $A: X \to Y$, often with additive noise: $y = Ax + \eta$, where $\eta$ models measurement noise (Haltmeier et al., 2020). In several modalities, $A$ may be nonlinear, as in phase retrieval or full waveform inversion (FWI), or encode physics via PDEs (e.g., Maxwell, Helmholtz, diffusion equations) (Kocyigit et al., 2015, Feng et al., 2024, Caiafa et al., 2023).
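
As a concrete toy instance, the sketch below sets up a 1-D deblurring problem of this form in NumPy. The Gaussian kernel width, noise level, and test signal are illustrative assumptions, not taken from any cited work.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 128
x_true = np.zeros(n)
x_true[40:60] = 1.0                        # piecewise-constant test signal

# A: discrete convolution with a Gaussian kernel -- a compact, smoothing
# forward operator, built as a dense matrix for transparency.
t = np.arange(n)
A = np.exp(-0.5 * ((t[:, None] - t[None, :]) / 3.0) ** 2)
A /= A.sum(axis=1, keepdims=True)          # normalize rows

eta = 0.01 * rng.standard_normal(n)        # additive measurement noise
y = A @ x_true + eta                       # observed data y = A x + eta
```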

Following Hadamard, a problem is ill-posed if any of the following conditions fails:

  • Existence: a solution $x$ exists for every admissible $y$.
  • Uniqueness: distinct $x$ never map to the same $y$.
  • Stability: the solution depends continuously on the data, so small perturbations in $y$ cannot cause large deviations in $x$.

Examples include limited-angle CT (the range of $A$ is not closed), phase retrieval (modulus-only data), and underdetermined compressed sensing problems (more unknowns than observations) (Haltmeier et al., 2020, Birdi et al., 13 Nov 2025). Theoretical analysis reveals that in many settings uniqueness can only be assured by enriching the data (e.g., by diversity, as in phase-diverse measurements (Birdi et al., 13 Nov 2025), or full activation in Magnetorelaxometry Imaging (Föcke et al., 2018)).
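
Continuing the toy deblurring example above, instability can be seen directly: the smoothing operator has rapidly decaying singular values, so naive inversion $x = A^{-1}y$ amplifies the noise component enormously.

```python
# Singular values of the Gaussian blur decay rapidly, so the condition
# number is astronomical and naive inversion is numerically useless --
# which is exactly the point of this demonstration.
s = np.linalg.svd(A, compute_uv=False)
print(f"condition number of A: {s[0] / s[-1]:.2e}")

x_naive = np.linalg.solve(A, y)            # "exact" inverse applied to noisy y
rel_err = np.linalg.norm(x_naive - x_true) / np.linalg.norm(x_true)
print(f"relative error of naive inversion: {rel_err:.2e}")
```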

2. Classical and Variational Regularization

To overcome ill-posedness, classical regularization seeks stable approximate solutions that balance fidelity to the data and prior assumptions on $x$. The standard variational approach minimizes

$$x^* = \arg\min_{x} \frac{1}{2}\|Ax - y\|^2 + \alpha R(x),$$

where $R$ is a regularizer encoding prior knowledge (e.g., total variation (TV), the $\ell_1$-norm, a Tikhonov term) and $\alpha > 0$ controls the regularization strength (Haltmeier et al., 2020, Föcke et al., 2018). Existence, uniqueness, and stability of solutions can be proved under convexity and coercivity assumptions on $R$.

Optimality conditions and Euler-Lagrange equations characterize solutions, and classical iterative and proximal algorithms (Landweber iteration, proximal gradient descent (PGD), ADMM) are used for optimization. In many physical domains (MRXI, quantitative PAT), variational regularization proves critical for recovering physical parameters in the presence of severe ill-posedness (Föcke et al., 2018, Kocyigit et al., 2015).
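
As a minimal sketch of this machinery, the ISTA/PGD loop below solves the variational problem above with an $\ell_1$ regularizer on the toy deblurring example; the choice of $\alpha$ and the iteration count are illustrative.

```python
def soft_threshold(v, tau):
    # Proximal operator of tau * ||.||_1 (elementwise soft-thresholding).
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

alpha = 1e-3
L = np.linalg.norm(A.T @ A, 2)             # Lipschitz constant of the gradient
x = np.zeros(n)
for _ in range(500):
    grad = A.T @ (A @ x - y)               # gradient of 0.5 * ||Ax - y||^2
    x = soft_threshold(x - grad / L, alpha / L)   # prox step for alpha*||.||_1
```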

3. Data-Driven and Hybrid Regularization

Recent advances leverage data-driven regularizers, replacing or augmenting hand-crafted priors with learned models. There are several dominant paradigms:

  • Plug-and-Play (PnP) Methods: Iterative schemes in which the proximal step for $R$ is replaced by a powerful image denoiser (e.g., BM3D, DnCNN, U-Net), even when $R$ is never explicitly defined; a minimal PnP-PGD loop is sketched after this list. PnP-PGD and PnP-ADMM are widely used, and convergence can be assured if the denoiser is nonexpansive and locally homogeneous (Tan et al., 3 Sep 2025). RED (Regularization by Denoising), Tweedie-based scores, and diffusion models (score-based generative models) further advance this concept (Bendel et al., 29 Jan 2025, Hu et al., 2024).
  • Unrolled Optimization and Deep Networks: Deep nets “unroll” classic iterative algorithms, either by sharing parameters (e.g., Neumann Networks (Gilton et al., 2019)) or stacking blocks with learnable steps. U-Net or residual CNNs can be used as post-processing correctors (e.g., FBPConvNet (Jin et al., 2016)). The architecture often mirrors the physics of the problem to embed inductive bias (Wang et al., 2018).
  • Hybrid Models and Implicit Priors: Frameworks such as self-supervised learning embed the forward model directly in the loss, learning an inverse solver without ground-truth images (Senouf et al., 2019). Null-space and NETT frameworks guarantee data consistency and provide function-space convergence theory (Haltmeier et al., 2020).
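
A minimal PnP-PGD sketch on the toy example follows; the `denoise` function here is a stand-in moving-average smoother, where practical PnP would plug in BM3D, DnCNN, or a U-Net with the same interface.

```python
def denoise(v, width=5):
    # Toy moving-average smoother standing in for a learned denoiser;
    # only the call signature matters for the PnP scheme.
    kernel = np.ones(width) / width
    return np.convolve(v, kernel, mode="same")

x = np.zeros(n)
for _ in range(200):
    grad = A.T @ (A @ x - y)               # gradient step on data fidelity
    x = denoise(x - grad / L)              # denoiser replaces prox_{alpha R}
```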

Newer approaches employ invertible architectures (iResNets) to obtain reconstructions with provable regularization properties, even for nonlinear forward operators $F$ (Arndt et al., 2024). Generative variational models optimize over both image and latent variables without external training, ensuring existence and stability in function space (Habring et al., 2021).

4. Priors, Restoration Operators, and Score-Based Models

The nature of the prior or regularizer $R(x)$ is foundational. Early work emphasized sparsity (wavelets, overcomplete dictionaries), where images are assumed to admit sparse representations. The pursuit of sparse codes or learned dictionaries is effective for both linear and nonlinear inverse imaging (e.g., microwave tomography) (Caiafa et al., 2023). Modern approaches generalize this perspective:

  • Restoration Operator Ensembles: Priors can be constructed via the scores of an ensemble of MMSE restoration operators (ShaRP), enabling the algorithm to handle structured artifacts and self-supervised training (Hu et al., 2024). Restoration-based stochastic priors frequently outperform those built from Gaussian denoisers; a schematic RED-style score update appears after this list.
  • Diffusion Models: Pre-trained diffusion score models can be adapted via iterative conditional inference (FIRE, DDfire), achieving state-of-the-art accuracy and robust unsupervised inversion across super-resolution, inpainting, and phase retrieval (Bendel et al., 29 Jan 2025).
  • Latent-IPDE Regularization: Recent discoveries suggest that disparate inverse problems (FWI, CT, EM inversion) share a unified latent-space PDE structure, with solutions distinguished only by their linearly coupled initial conditions (Feng et al., 2024). This hidden property connects data and target property embeddings through a shared, identifiable wave equation, suggesting avenues for efficient cross-modal learning.
  • Convex Adversarial Priors: CLEAR constructs data-driven convex regularizers using adversarially trained input-convex neural networks with latent optimization. Under convexity and uniqueness assumptions, this yields provably unique and robust solutions on the data manifold, with empirical superiority over WGANs and unconstrained adversarial regularizers in MRI reconstruction (Wang et al., 2023).
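
To make the restoration-score idea concrete, the schematic loop below follows the RED viewpoint, treating the denoiser residual $x - D(x)$ as a scaled estimate of the negative prior score. It reuses the toy `denoise` from the PnP sketch and is not the ShaRP or DDfire algorithm of the cited papers.

```python
lam = 0.5
step = 1.0 / (L + lam)                     # step size covering both terms
x = np.zeros(n)
for _ in range(300):
    data_grad = A.T @ (A @ x - y)          # gradient of 0.5 * ||Ax - y||^2
    prior_grad = lam * (x - denoise(x))    # RED-style residual as prior score
    x -= step * (data_grad + prior_grad)
```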

5. Model Adaptation, Diversity, and Experimental Design

Practical imaging systems often encounter forward model drift or require robustness to new data distributions. Several adaptation and design strategies have emerged:

  • Parametrize & Perturb (Fine-Tuning): When the forward model changes, retrain or fine-tune the network to minimize a data-consistency loss plus a proximity regularizer on the parameters (Gilton et al., 2020).
  • Reuse & Regularize (Plug-and-Play Regularization): Use a fixed inverse network as a denoising regularizer in an outer model-based optimization, ensuring transferability across drifted models without retraining (Gilton et al., 2020).
  • Learning and Augmenting with Phase Diversity: Data-augmented diversity, for example generating phase-diverse pseudo-measurements with a trained U-Net, mitigates ill-posedness without additional hardware in incoherent and coherent optical imaging. These pseudo-data, when injected into standard multi-measurement reconstructions (e.g., Wiener filter, phase retrieval), yield dramatic improvements in stability and fidelity (Birdi et al., 13 Nov 2025).
  • Optimal Experimental Design: MRXI and similar modalities benefit from mathematically guided selection of sensor locations, coil orientations, and activation scenarios to maximize identifiability and reduce conditioning (Föcke et al., 2018).
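
The parametrize-and-perturb strategy can be sketched in PyTorch as below. The names `net`, `A_new`, and `y_batch`, and the penalty weight `mu` are illustrative assumptions; only the loss structure (data consistency plus parameter proximity) follows the description above.

```python
import torch

def adapt(net, A_new, y_batch, mu=1e-2, steps=100, lr=1e-4):
    """Fine-tune a pretrained reconstruction network to a drifted
    forward operator A_new (a callable), keeping weights near their
    pretrained values theta_0 via a proximity penalty."""
    theta0 = [p.detach().clone() for p in net.parameters()]
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    for _ in range(steps):
        x_hat = net(y_batch)                           # candidate reconstruction
        fidelity = ((A_new(x_hat) - y_batch) ** 2).mean()
        proximity = sum(((p - p0) ** 2).sum()
                        for p, p0 in zip(net.parameters(), theta0))
        loss = fidelity + mu * proximity
        opt.zero_grad()
        loss.backward()
        opt.step()
    return net
```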

6. Applications, Performance, and Future Directions

Imaging inverse problem methodologies are widely applied:

  • Biomedical Imaging (MRI, CT, STEM, Magnetorelaxometry): Application-tuned priors and architectures deliver superior performance on accelerated MRI, low-dose CT, electron microscopy, and nanoparticle mapping, outperforming classical TV and supervised learning given sufficient prior expertise (Jin et al., 2016, Wang et al., 2018, Föcke et al., 2018, Wang et al., 2023).
  • Microwave Tomography, Photoacoustic & Acousto-Electric Tomography: Coupled-physics modalities combine PDE-based modeling with variational and deep learning regularization, sometimes enhanced by explicit CGO solution frameworks for uniqueness and stability (Kocyigit et al., 2015, Caiafa et al., 2023).
  • Self-Supervised and Semantic Imaging: Fully self-supervised solvers operate without reference images, and recent vision-language models support nonparametric hypothesis testing on semantic attributes of the reconstructed images, with provable control of Type I error (Senouf et al., 2019, Xi et al., 28 May 2025).

Key directions include unified latent modeling across physical problems (Feng et al., 2024), theoretically grounded stochastic priors (Hu et al., 2024), plug-and-play and diffusion-based sampling with precise convergence guarantees (Tan et al., 3 Sep 2025, Bendel et al., 29 Jan 2025), and physics-informed data augmentation (Birdi et al., 13 Nov 2025). Challenges remain in extending theoretical guarantees to non-convex and highly data-driven architectures, generalizing to nonparametric forward model drift, and integrating semantic or task-aware objectives at scale.
