
Prior-guided Diffusion Model for Cell Segmentation in Quantitative Phase Imaging (2405.06175v1)

Published 10 May 2024 in eess.IV and cs.CV

Abstract: Purpose: Quantitative phase imaging (QPI) is a label-free technique that provides high-contrast images of tissues and cells without chemicals or dyes. Accurate semantic segmentation of cells in QPI is essential for many biomedical applications. While segmentation based on diffusion models (DMs) has demonstrated promising results, the requirement for multiple sampling steps reduces efficiency. This study aims to enhance DM-based segmentation by introducing prior-guided content information into the starting noise, thereby minimizing the inefficiency of multiple sampling steps. Approach: A prior-guided mechanism is introduced into DM-based segmentation, replacing the randomly sampled starting noise with noise informed by image content. This mechanism uses a second trained DM together with denoising diffusion implicit model (DDIM) inversion to embed content information from the to-be-segmented images into the starting noise. An evaluation method is also proposed to assess the quality of the starting noise, considering both its content and its distribution. Results: Extensive experiments on several QPI cell-segmentation datasets showed that the proposed method achieved superior DM-based segmentation with only a single sampling step. Ablation studies and visual analysis further highlighted the significance of content priors in DM-based segmentation. Conclusion: The proposed method effectively leverages prior content information to improve DM-based segmentation, providing accurate results while reducing the need for multiple sampling steps. The findings emphasize the importance of integrating content priors into DM-based segmentation methods for optimal performance.
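The core idea in the Approach section is to replace the random Gaussian starting noise with a latent obtained by running DDIM inversion on the to-be-segmented image through a trained diffusion model. A minimal NumPy sketch of deterministic DDIM inversion is shown below; the `eps_model` noise predictor, the schedule, and the image shapes are placeholder assumptions for illustration, not the paper's trained models or settings:

```python
import numpy as np

def ddim_invert(x0, eps_model, alpha_bars):
    """Deterministic DDIM inversion: map a clean image x0 forward to a
    content-informed latent x_T by running the DDIM update in reverse.

    eps_model(x, t) -- assumed trained noise predictor (placeholder here).
    alpha_bars     -- cumulative noise schedule, alpha_bar_0 close to 1.
    """
    x = x0
    for t in range(len(alpha_bars) - 1):
        a_t, a_next = alpha_bars[t], alpha_bars[t + 1]
        eps = eps_model(x, t)
        # Predict x0 from the current latent, then deterministically
        # re-noise it to timestep t+1 (no stochastic term, as in DDIM).
        x0_pred = (x - np.sqrt(1.0 - a_t) * eps) / np.sqrt(a_t)
        x = np.sqrt(a_next) * x0_pred + np.sqrt(1.0 - a_next) * eps
    return x  # content-informed starting noise x_T

# Toy run: 4 inversion steps with a dummy zero-noise predictor.
alpha_bars = np.linspace(0.999, 0.1, 5)
x0 = np.ones((8, 8))          # stand-in for a QPI phase image
xT = ddim_invert(x0, lambda x, t: np.zeros_like(x), alpha_bars)
```

In the paper's pipeline, `xT` would then seed the segmentation DM, which can denoise it to a mask in a single sampling step because `xT` already carries the image's content.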
