Noise-robust latent vector reconstruction in ptychography using deep generative models (2311.07580v3)

Published 18 Oct 2023 in eess.IV, physics.comp-ph, and physics.optics

Abstract: Computational imaging is increasingly vital for a broad spectrum of applications, ranging from biological to material sciences. This includes applications where the object is known and sufficiently sparse, allowing it to be described with a reduced number of parameters. When no explicit parameterization is available, a deep generative model can be trained to represent an object in a low-dimensional latent space. In this paper, we harness this dimensionality reduction capability of autoencoders to search for the object solution within the latent space rather than the object space. We demonstrate a novel approach to ptychographic image reconstruction by integrating a deep generative model obtained from a pre-trained autoencoder within an Automatic Differentiation Ptychography (ADP) framework. This approach enables the retrieval of objects from highly ill-posed diffraction patterns, offering an effective method for noise-robust latent vector reconstruction in ptychography. Moreover, the mapping into a low-dimensional latent space allows us to visualize the optimization landscape, which provides insight into the convexity and convergence behavior of the inverse problem. With this work, we aim to facilitate new applications for sparse computational imaging, such as cases where low radiation doses or rapid reconstructions are essential.
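The abstract describes optimizing a low-dimensional latent vector through a differentiable ptychographic forward model, rather than optimizing the object pixels directly. The sketch below illustrates that general idea in PyTorch; the decoder architecture, probe, scan positions, loss, and all numerical values are hypothetical placeholders for illustration, not the paper's actual implementation.

```python
import math
import torch
import torch.nn as nn

# Hypothetical stand-in for a pre-trained autoencoder decoder: maps a
# low-dimensional latent vector to a complex object transmission function
# on an n x n grid (parameterized here as amplitude and phase maps).
class Decoder(nn.Module):
    def __init__(self, latent_dim=8, n=64):
        super().__init__()
        self.n = n
        self.net = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(),
            nn.Linear(256, 2 * n * n),          # amplitude and phase maps
        )

    def forward(self, z):
        amp_phase = self.net(z).view(2, self.n, self.n)
        amplitude = torch.sigmoid(amp_phase[0])          # transmission in [0, 1]
        phase = math.pi * torch.tanh(amp_phase[1])
        return amplitude * torch.exp(1j * phase)

def forward_ptychography(obj, probe, positions, patch=32):
    """Toy far-field ptychographic forward model: at each scan position,
    multiply the probe by the local object patch and propagate with an FFT."""
    intensities = []
    for (y, x) in positions:
        exit_wave = probe * obj[y:y + patch, x:x + patch]
        farfield = torch.fft.fftshift(torch.fft.fft2(exit_wave))
        intensities.append(torch.abs(farfield) ** 2)
    return torch.stack(intensities)

# --- illustrative setup (all values hypothetical) ---
torch.manual_seed(0)
decoder = Decoder()                         # would be the pre-trained, frozen decoder
for p in decoder.parameters():
    p.requires_grad_(False)

probe = torch.exp(1j * torch.zeros(32, 32))              # flat dummy probe
positions = [(y, x) for y in (0, 16, 32) for x in (0, 16, 32)]

# Synthetic "measured" data from a ground-truth latent vector, with shot-noise-like corruption.
z_true = torch.randn(8)
with torch.no_grad():
    measured = forward_ptychography(decoder(z_true), probe, positions)
    measured = torch.poisson(measured * 100.0) / 100.0

# Reconstruction: optimize only the latent vector through the differentiable pipeline.
z = torch.randn(8, requires_grad=True)
optimizer = torch.optim.Adam([z], lr=0.05)
for step in range(200):
    optimizer.zero_grad()
    predicted = forward_ptychography(decoder(z), probe, positions)
    # Amplitude-based misfit is common in ptychography; shown here as a simple MSE on amplitudes.
    loss = torch.mean((torch.sqrt(predicted + 1e-8) - torch.sqrt(measured + 1e-8)) ** 2)
    loss.backward()
    optimizer.step()
```

Because the search space is the latent vector rather than the full object array, the same loop can also be used to sample the loss over a 2D latent grid and visualize the optimization landscape, as the abstract mentions.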
