Wide-Field, High-Resolution Reconstruction in Computational Multi-Aperture Miniscope Using a Fourier Neural Network (2403.06439v2)

Published 11 Mar 2024 in physics.optics and eess.IV

Abstract: Traditional fluorescence microscopy is constrained by inherent trade-offs among resolution, field-of-view, and system complexity. To navigate these challenges, we introduce a simple and low-cost computational multi-aperture miniature microscope, utilizing a microlens array for single-shot wide-field, high-resolution imaging. Addressing the challenges posed by extensive view multiplexing and non-local, shift-variant aberrations in this device, we present SV-FourierNet, a novel multi-channel Fourier neural network. SV-FourierNet facilitates high-resolution image reconstruction across the entire imaging field through its learned global receptive field. We establish a close relationship between the physical spatially-varying point-spread functions and the network's learned effective receptive field. This ensures that SV-FourierNet has effectively encapsulated the spatially-varying aberrations in our system, and learned a physically meaningful function for image reconstruction. Training of SV-FourierNet is conducted entirely on a physics-based simulator. We showcase wide-field, high-resolution video reconstructions on colonies of freely moving C. elegans and imaging of a mouse brain section. Our computational multi-aperture miniature microscope, augmented with SV-FourierNet, represents a major advancement in computational microscopy and may find broad applications in biomedical research and other fields requiring compact microscopy solutions.
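The paper's SV-FourierNet architecture is not reproduced in this page, but the core idea the abstract describes, a multi-channel network operating in the Fourier domain so that every output pixel has a global receptive field, can be illustrated with a minimal sketch. The function name, array shapes, and random filters below are our own assumptions for demonstration; in the actual network the spectral filters would be learned from physics-based simulations.

```python
import numpy as np

def multichannel_fourier_layer(image, filters):
    """Apply per-channel Fourier-domain filters to a 2D measurement.

    image:   (H, W) real array, e.g. a multiplexed sensor measurement.
    filters: (C, H, W) complex array, one transfer function per channel
             (random here for illustration; learned in practice).
    Returns a (C, H, W) real stack of filtered feature maps.
    """
    spectrum = np.fft.fft2(image)              # one FFT gives a global receptive field
    filtered = filters * spectrum[None, :, :]  # channel-wise spectral multiplication
    return np.real(np.fft.ifft2(filtered))     # back to the spatial domain

# Toy demonstration on a random "measurement"
rng = np.random.default_rng(0)
meas = rng.standard_normal((64, 64))
filts = rng.standard_normal((4, 64, 64)) + 1j * rng.standard_normal((4, 64, 64))
features = multichannel_fourier_layer(meas, filts)
print(features.shape)  # (4, 64, 64)
```

Because multiplication in the frequency domain is equivalent to a full-image convolution, each channel can capture the non-local, shift-variant blur that a small spatial convolution kernel cannot.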
