
Lensless computational imaging through deep learning (1702.08516v2)

Published 22 Feb 2017 in cs.CV and physics.optics

Abstract: Deep learning has been proven to yield reliably generalizable answers to numerous classification and decision tasks. Here, we demonstrate for the first time, to our knowledge, that deep neural networks (DNNs) can be trained to solve inverse problems in computational imaging. We experimentally demonstrate a lens-less imaging system where a DNN was trained to recover a phase object given a raw intensity image recorded some distance away.

Citations (502)

Summary

  • The paper demonstrates that deep neural networks can learn to invert diffraction patterns to reconstruct phase object images.
  • A convolutional ResNet architecture maps raw patterns captured by a CMOS camera to high-quality reconstructions.
  • Results reveal robust generalization across datasets, highlighting potential to bypass complex traditional inverse modeling.

Lensless Computational Imaging through Deep Learning

The paper "Lensless Computational Imaging through Deep Learning" investigates the potential of deep neural networks (DNNs) in solving inverse problems in computational imaging. The authors present a lensless imaging system that leverages DNNs to recover phase objects from raw intensity images, a task traditionally resolved through complex computational algorithms.

Introduction

Inverse problems in computational imaging entail the reconstruction of object properties from indirect measurements. Historically studied through mathematical frameworks such as Tikhonov regularization and Wiener deconvolution, these problems have seen a resurgence with advancements in convex optimization and sparse representations. Concomitantly, neural networks, particularly DNNs, have demonstrated efficacy in function approximation across complex tasks like object detection and image restoration. This paper explores whether DNNs can be trained to tackle inverse problems by learning the necessary mappings from data alone.

Methodology

The authors deploy a convolutional residual neural network (ResNet) architecture in a lensless optical setup. The system captures diffraction patterns of pure phase objects, which are dense rather than sparse, in contrast to earlier computational imaging approaches that rely on sparsity assumptions about the object. The experimental design includes a spatial light modulator (SLM) that imposes the phase modulation and a CMOS camera that records the diffraction pattern at various object-to-sensor distances.
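To make the measurement concrete, the following sketch simulates what such a setup records: a pure phase object propagated to the sensor plane, where only intensity is captured. This is not code from the paper; it assumes an angular-spectrum propagation model, and the wavelength, pixel pitch, and distance values are illustrative.

```python
# Minimal sketch (not from the paper): intensity diffraction pattern of a
# pure phase object via angular-spectrum free-space propagation.
import numpy as np

def angular_spectrum_propagate(field, wavelength, pixel_pitch, distance):
    """Propagate a complex field by `distance` using the angular spectrum method."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=pixel_pitch)          # spatial frequencies (1/m)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    # Free-space transfer function; evanescent components are discarded
    H = np.where(arg > 0,
                 np.exp(1j * 2 * np.pi * distance / wavelength
                        * np.sqrt(np.maximum(arg, 0.0))),
                 0.0)
    return np.fft.ifft2(np.fft.fft2(field) * H)

# Pure phase object: unit amplitude, spatially varying phase (random here for illustration)
phase = np.random.uniform(0, np.pi, size=(256, 256))
object_field = np.exp(1j * phase)

# Propagate to the sensor plane; the CMOS camera records intensity only
sensor_field = angular_spectrum_propagate(object_field,
                                          wavelength=632.8e-9,  # illustrative wavelength
                                          pixel_pitch=8e-6,     # illustrative pixel pitch
                                          distance=0.1)         # illustrative distance
raw_intensity = np.abs(sensor_field) ** 2
```

Recovering `phase` from `raw_intensity` alone is the inverse problem the network is trained to solve.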

Training involves datasets such as Faces-LFW and ImageNet, with the network learning to invert diffraction patterns into recognizable images. The network comprises convolutional layers organized into residual blocks, whose skip connections promote efficient training and generalization.
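The sketch below shows the kind of residual building block such a reconstruction network stacks. It is not the authors' exact architecture; the channel counts, block depth, and input size are placeholder choices.

```python
# Hedged sketch of a ResNet-style reconstruction path (not the paper's exact model).
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        # Skip connection: the block learns a residual correction to its input
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + x)

# Illustrative stack mapping a single-channel diffraction pattern to a phase estimate
model = nn.Sequential(
    nn.Conv2d(1, 32, kernel_size=3, padding=1),
    *[ResidualBlock(32) for _ in range(4)],
    nn.Conv2d(32, 1, kernel_size=3, padding=1),
)
raw_pattern = torch.randn(1, 1, 256, 256)   # placeholder for a captured intensity image
reconstruction = model(raw_pattern)
```

Training would then minimize a pixel-wise loss between `reconstruction` and the known phase image displayed on the SLM.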

Results

The DNN successfully reconstructs images across diverse datasets. Remarkably, networks trained on specific datasets (e.g., faces) extrapolate to reconstruct entirely different classes (e.g., natural objects). These results suggest that the DNNs have learned a model encapsulating the system's physics, rather than merely memorizing training examples.

Quantitative analysis reveals the network's robustness to moderate perturbations in sensor positioning and invariance to lateral shifts and rotations. However, significant deviations lead to performance degradation, indicating the limits of the network's generalization.
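A robustness check of this kind can be expressed as a small evaluation loop: reconstruct from patterns generated at perturbed sensor distances and score each reconstruction against the ground-truth phase. The sketch below uses Pearson correlation as one common quality metric; `model` and `simulate_pattern` are hypothetical placeholders, not code from the paper.

```python
# Hedged sketch of a distance-perturbation robustness sweep (illustrative only).
import numpy as np

def pearson_correlation(a: np.ndarray, b: np.ndarray) -> float:
    """Correlation between a reconstruction and the ground-truth phase image."""
    return float(np.corrcoef(a.ravel(), b.ravel())[0, 1])

def evaluate_distance_robustness(model, phase_gt, simulate_pattern, distances):
    """Score reconstructions from patterns simulated at perturbed sensor distances."""
    scores = {}
    for d in distances:
        pattern = simulate_pattern(phase_gt, distance=d)   # hypothetical forward model
        reconstruction = model(pattern)                    # hypothetical trained DNN
        scores[d] = pearson_correlation(reconstruction, phase_gt)
    return scores
```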

Discussion

The findings indicate that DNNs can effectively solve inverse problems by learning directly from empirical data, circumventing the need to precisely define the forward model. This capability has practical implications for imaging systems where precise modeling is challenging, such as those involving complex optics.

Future Implications

Future research may extend this approach to more complex imaging scenarios, such as incorporating attenuation effects or microscopy setups. A promising avenue is the training of networks on physical objects, enhancing applicability in real-world environments.

In summary, this paper demonstrates the potential of DNNs in computational imaging, particularly in handling inverse problems without predefined models. The implications for AI in imaging suggest a transformative potential in fields requiring detailed object reconstruction from limited or indirect data.
