
Solving ill-posed inverse problems using iterative deep neural networks (1704.04058v2)

Published 13 Apr 2017 in math.OC, cs.AI, math.FA, and math.NA

Abstract: We propose a partially learned approach for the solution of ill posed inverse problems with not necessarily linear forward operators. The method builds on ideas from classical regularization theory and recent advances in deep learning to perform learning while making use of prior information about the inverse problem encoded in the forward operator, noise model and a regularizing functional. The method results in a gradient-like iterative scheme, where the "gradient" component is learned using a convolutional network that includes the gradients of the data discrepancy and regularizer as input in each iteration. We present results of such a partially learned gradient scheme on a non-linear tomographic inversion problem with simulated data from both the Shepp-Logan phantom as well as a head CT. The outcome is compared against FBP and TV reconstruction and the proposed method provides a 5.4 dB PSNR improvement over the TV reconstruction while being significantly faster, giving reconstructions of 512 x 512 pixel images in about 0.4 seconds using a single GPU.

Authors (2)
  1. Jonas Adler (19 papers)
  2. Ozan Öktem (38 papers)
Citations (585)

Summary

  • The paper introduces a partially learned gradient-update scheme that combines iterative optimization with deep convolutional networks for ill-posed inverse problems.
  • It synergizes classical regularization methods with modern deep learning, achieving a 5.4 dB PSNR improvement over traditional total variation regularization.
  • The approach demonstrates significant computational efficiency by reducing reconstruction time to 0.4 seconds per 512x512 image on a single GPU.

Solving Ill-posed Inverse Problems Using Iterative Deep Neural Networks

This paper explores a partially learned approach to solving ill-posed inverse problems involving potentially non-linear forward operators. The methodology integrates classical regularization theory with modern deep learning techniques. This synthesis manifests as a gradient-like iterative scheme where the "gradient" is learned via a convolutional network that takes the gradients of both the data discrepancy and the regularizer as input at each iteration.

Methodology

The authors propose an iterative framework that combines deep learning and traditional approaches to inverse problem solving. The method builds on the following core ideas:

  • Gradient-like Iterative Scheme: The approach refines the standard gradient descent by learning an update operator through a convolutional network. This network processes the gradients of data discrepancy and regularization terms, effectively adapting the optimization path.
  • Hybrid Regularization: Classical regularization methods are integrated into the learning framework. The method employs a learned update operator, incorporating regularization implicitly. Prior information about the inverse problem, such as operator structure and noise characteristics, is encoded within the model.
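The iteration described above can be sketched in a few lines of NumPy. This is a schematic, not the paper's implementation: the `learned_update` here is a hand-coded stand-in for the trained convolutional network, and the identity forward operator, quadratic regularizer, and step sizes are illustrative assumptions.

```python
import numpy as np

def gradient_like_iteration(x0, forward, adjoint, data, grad_reg,
                            learned_update, n_iters=10):
    """Partially learned gradient scheme (sketch):
    x_{k+1} = x_k + learned_update(x_k, grad_data, grad_reg(x_k)),
    where grad_data is the gradient of the data discrepancy."""
    x = x0.copy()
    for _ in range(n_iters):
        grad_data = adjoint(forward(x) - data)  # gradient of 0.5*||T(x) - g||^2
        x = x + learned_update(x, grad_data, grad_reg(x))
    return x

# Toy setup: identity forward operator, quadratic regularizer, and a
# hand-coded "learned" update standing in for the trained CNN.
A = np.eye(4)
forward = lambda x: A @ x
adjoint = lambda r: A.T @ r
grad_reg = lambda x: 2.0 * x  # gradient of ||x||^2
learned_update = lambda x, gd, gr: -0.1 * (gd + 0.01 * gr)

x_true = np.array([1.0, 2.0, 3.0, 4.0])
data = forward(x_true)
x_rec = gradient_like_iteration(np.zeros(4), forward, adjoint, data,
                                grad_reg, learned_update, n_iters=200)
```

With this hand-coded update the scheme reduces to plain regularized gradient descent; the paper's key point is that replacing the fixed update rule with a trained network lets the optimization path adapt to the problem class.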

Numerical Results

The paper presents empirical evaluations on non-linear tomographic reconstruction using simulated datasets, including the Shepp-Logan phantom and a head CT. Significant improvements over filtered back-projection (FBP) and total variation (TV) regularization are reported. The learned approach achieves a 5.4 dB improvement in peak signal-to-noise ratio (PSNR) over TV regularization, while reducing computation time to approximately 0.4 seconds for 512x512 pixel images on a single GPU.
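For reference, PSNR in the comparison above follows its standard definition (this helper is ours, not code from the paper). Note that a 5.4 dB gain corresponds to roughly a 3.5x reduction in mean squared error, since 10^(5.4/10) ≈ 3.47.

```python
import numpy as np

def psnr(reference, reconstruction, data_range=None):
    """Peak signal-to-noise ratio in dB between two images."""
    if data_range is None:
        data_range = reference.max() - reference.min()
    mse = np.mean((reference - reconstruction) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)

# A constant 0.1 error on a unit-range image gives MSE = 0.01, i.e. 20 dB.
ref = np.zeros((8, 8))
ref[2:6, 2:6] = 1.0
val = psnr(ref, ref + 0.1)
```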

Implications

The partially learned gradient method addresses key challenges in inverse problem domains:

  • Computational Efficiency: The method offers reduced runtimes compared to classical iterative schemes while preserving high reconstruction quality. This is crucial for large-scale applications such as 3D medical imaging.
  • Parameter Optimization: By leveraging training data, regularization parameters are inherently optimized, circumventing the arduous task of manual tuning.
  • Nuisance Parameter Reconstruction: The framework can be extended to incorporate and jointly optimize nuisance parameters, enhancing reconstruction accuracy in complex scenarios.
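One way to read the "parameter optimization" point above: the learned update operator is trained end-to-end by minimizing reconstruction error over training pairs, so the regularization trade-off is absorbed into the network weights. A schematic supervised objective (notation ours, not verbatim from the paper) might look like:

```latex
\theta^{*} = \arg\min_{\theta} \sum_{i=1}^{N}
  \bigl\| x^{(K)}_{\theta}(g_i) - x_i^{\mathrm{true}} \bigr\|_2^2,
\qquad
x^{(k+1)} = x^{(k)} + \Lambda_{\theta}\bigl(x^{(k)},\,
  \nabla \mathcal{L}\bigl(T(x^{(k)}), g_i\bigr),\,
  \nabla S(x^{(k)})\bigr),
```

where $T$ is the forward operator, $\mathcal{L}$ the data discrepancy, $S$ the regularizing functional, and $\Lambda_{\theta}$ the learned update network applied for $K$ iterations.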

Conclusions and Future Directions

The proposed method demonstrates how deep learning can be effectively merged with established mathematical frameworks to tackle challenging inverse problems. The results underscore the potential of learning strategies in optimizing computational and reconstruction aspects. Future work may delve into:

  • Extending the approach to three-dimensional problems and diverse forward models.
  • Integrating sophisticated regularizers to leverage additional domain-specific knowledge.
  • Exploring the application of this framework to a broader class of inverse problems, enhancing its general applicability and robustness.

In summary, this paper contributes a scalable and flexible strategy for tackling ill-posed inverse problems, shedding light on the power of combining deep learning with classical approaches.