Three-dimensional virtual refocusing of fluorescence microscopy images using deep learning (1901.11252v2)

Published 31 Jan 2019 in cs.CV, cs.LG, physics.app-ph, and physics.optics

Abstract: Three-dimensional (3D) fluorescence microscopy in general requires axial scanning to capture images of a sample at different planes. Here we demonstrate that a deep convolutional neural network can be trained to virtually refocus a 2D fluorescence image onto user-defined 3D surfaces within the sample volume. With this data-driven computational microscopy framework, we imaged the neuron activity of a Caenorhabditis elegans worm in 3D using a time-sequence of fluorescence images acquired at a single focal plane, digitally increasing the depth-of-field of the microscope by 20-fold without any axial scanning, additional hardware, or a trade-off of imaging resolution or speed. Furthermore, we demonstrate that this learning-based approach can correct for sample drift, tilt, and other image aberrations, all digitally performed after the acquisition of a single fluorescence image. This unique framework also cross-connects different imaging modalities to each other, enabling 3D refocusing of a single wide-field fluorescence image to match confocal microscopy images acquired at different sample planes. This deep learning-based 3D image refocusing method might be transformative for imaging and tracking of 3D biological samples, especially over extended periods of time, mitigating photo-toxicity, sample drift, aberration and defocusing related challenges associated with standard 3D fluorescence microscopy techniques.

Authors (8)
  1. Yichen Wu (32 papers)
  2. Yair Rivenson (41 papers)
  3. Hongda Wang (13 papers)
  4. Yilin Luo (11 papers)
  5. Eyal Ben-David (15 papers)
  6. Laurent A. Bentolila (1 paper)
  7. Christian Pritz (1 paper)
  8. Aydogan Ozcan (125 papers)
Citations (180)

Summary

  • The paper introduces Deep-Z, a GAN-based method that digitally refocuses 2D fluorescence images into 3D volumes, extending DOF by 20-fold.
  • It bypasses mechanical axial scanning by appending a digital propagation matrix to the input image, enabling rapid imaging and post-acquisition correction of drift, tilt, and other aberrations.
  • Experimental validation on C. elegans neurons confirms Deep-Z's accuracy in replicating scanned images while reducing phototoxic effects.

Deep Learning for Enhanced 3D Fluorescence Microscopy

The paper "Three-dimensional virtual refocusing of fluorescence microscopy images using deep learning" presents a deep convolutional neural network approach to a prevalent challenge in three-dimensional fluorescence microscopy: the need for axial scanning during image acquisition. The computational microscopy framework uses a conditional generative adversarial network (GAN) to virtually refocus a 2D fluorescence image onto user-defined 3D surfaces within the sample volume. The proposed method, termed Deep-Z, increases the depth-of-field (DOF) by 20-fold without additional hardware and without compromising imaging resolution or speed.

Methodology and Results

Deep-Z operates by appending a digital propagation matrix (DPM) to a single fluorescence image, allowing digital refocusing without mechanical scanning or additional optical components. During training, the GAN is fed matched pairs of defocused and in-focus images acquired at various depths, together with DPMs encoding the corresponding axial refocusing distances; it thereby learns to generate refocused images at arbitrary user-defined planes within the sample. This data-driven approach bypasses the need for a physical model of the imaging system and can correct image aberrations such as sample drift, tilt, and defocusing.
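As a rough illustration of this input construction (a minimal sketch, not the authors' code; the array shapes, function names, and two-channel layout are assumptions), the DPM can be viewed as a matrix the same size as the image whose entries hold the target refocusing distance, concatenated with the image before it is passed to the trained generator:

```python
import numpy as np

def make_dpm(image_shape, defocus_um):
    # Digital propagation matrix: same size as the image, every entry
    # holding the target axial refocusing distance (a uniform plane here).
    return np.full(image_shape, defocus_um, dtype=np.float32)

def deep_z_input(image, defocus_um):
    # Stack the single 2D fluorescence image with its DPM into a
    # two-channel array, the kind of conditioned input the generator sees.
    return np.stack([image, make_dpm(image.shape, defocus_um)], axis=-1)

frame = np.random.rand(512, 512).astype(np.float32)  # placeholder image
stack_inputs = [deep_z_input(frame, z) for z in (-10.0, -5.0, 0.0, 5.0, 10.0)]
# Each element would be fed through the trained generator to obtain the
# frame virtually refocused to that plane, yielding a virtual z-stack.
```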

Experimental validation with Caenorhabditis elegans neurons demonstrated Deep-Z's capability to significantly extend the native DOF of a standard objective lens, achieving a refocusing range of about ±10 µm while closely matching mechanically scanned image stacks. Furthermore, Deep-Z's ability to digitally reconstruct volumes of dynamic biological samples from single-plane acquisitions highlights its potential for non-invasive long-term imaging, which is crucial for reducing phototoxicity and photobleaching.
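A sketch of how a single-plane time sequence becomes a 3D-plus-time recording is shown below (illustrative only; `refocus` stands in for a trained Deep-Z generator and is not a real API):

```python
import numpy as np

def virtual_volume(frames, z_planes_um, refocus):
    # Refocus every captured frame to each requested plane, producing a
    # (T, Z, H, W) volume from a movie acquired at a single focal plane.
    return np.stack([
        np.stack([refocus(f, z) for z in z_planes_um], axis=0)
        for f in frames
    ], axis=0)

movie = np.random.rand(30, 512, 512).astype(np.float32)  # placeholder frames
refocus = lambda img, z: img                              # stand-in for the network
volume = virtual_volume(movie, np.arange(-10, 11, 1.0), refocus)
print(volume.shape)  # (30, 21, 512, 512)
```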

Implications and Applications

The implications of Deep-Z are substantial: it removes the scanning requirement of traditional 3D fluorescence microscopy and thereby boosts imaging throughput. The approach can extend to other incoherent imaging modalities and cross-connect different microscopy techniques, allowing rapid refocusing onto arbitrary 3D surfaces within a fluorescent sample volume. Moreover, a spatially non-uniform DPM enables digital correction of optical aberrations after acquisition, improving imaging reliability and precision.
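A spatially non-uniform DPM can be sketched as follows (an assumption-laden illustration, not the authors' implementation: pixel-wise distances define a tilted target surface, and all names are hypothetical):

```python
import numpy as np

def tilted_dpm(shape, tilt_x_um=0.0, tilt_y_um=0.0, offset_um=0.0):
    # Non-uniform DPM: each pixel stores its own target refocusing distance,
    # here a plane tilted across the field of view to undo sample tilt.
    h, w = shape
    yy, xx = np.meshgrid(np.linspace(-0.5, 0.5, h),
                         np.linspace(-0.5, 0.5, w), indexing="ij")
    return (offset_um + tilt_x_um * xx + tilt_y_um * yy).astype(np.float32)

# e.g. bring a sample tilted by ~6 um along x and ~2 um along y back into focus
dpm = tilted_dpm((512, 512), tilt_x_um=6.0, tilt_y_um=2.0)
```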

The development of Deep-Z+ further illustrates cross-modality digital refocusing: given a wide-field fluorescence input, the network's output is matched to confocal microscopy images acquired at the corresponding sample planes, merging optical sectioning with digital refocusing. This capability, demonstrated on microtubule structures, underscores the framework's potential to enhance image clarity and structural detail without additional hardware.

Future Directions

While Deep-Z represents a significant advance in fluorescence microscopy, future research can expand its theoretical and practical scope, particularly in bioimaging. Integration with adaptive optics, or more robust models that handle diverse sample conditions, could benefit complex biological studies requiring prolonged observation and high-resolution imaging. Further extension of the axial refocusing range through point spread function (PSF) engineering, or transfer learning to reduce training time across different imaging systems, could also improve its usability and scalability.

Overall, the paper establishes a strong foundation for advancing computational fluorescence microscopy, providing a versatile tool that addresses longstanding limitations in the field and paving the way for future innovations in 3D imaging technologies.