
Deep ContourFlow: Advancing Active Contours with Deep Learning (2407.10696v1)

Published 15 Jul 2024 in cs.CV

Abstract: This paper introduces a novel approach that combines unsupervised active contour models with deep learning for robust and adaptive image segmentation. Traditional active contours provide a flexible framework for contour evolution, while deep learning offers the capacity to learn intricate features and patterns directly from raw data. Our proposed methodology leverages the strengths of both paradigms, presenting a framework for both unsupervised and one-shot image segmentation. It captures complex object boundaries without requiring extensive labeled training data. This is particularly valuable in histology, a field facing a significant shortage of annotations due to the challenging and time-consuming nature of the annotation process. We illustrate and compare our results to state-of-the-art methods on a histology dataset and show significant improvements.
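To make the idea of "contour evolution" concrete, the following is a minimal sketch of a classic active-contour (snake) update step, not the paper's actual Deep ContourFlow scheme: a closed contour of points is iteratively smoothed by an internal elasticity force and pulled by the gradient of a feature map (in the paper, such features would come from a deep network; here a plain NumPy array stands in). The function name and parameters are illustrative assumptions.

```python
import numpy as np

def evolve_contour(points, feature_map, alpha=0.1, gamma=1.0, steps=100):
    """Evolve a closed contour on a 2D feature map (hedged sketch, not the
    paper's method). points: (n, 2) array of (x, y) coordinates.
    alpha weights the internal smoothing force, gamma the external
    feature-gradient force."""
    # Gradient of the feature map: np.gradient returns d/drows (y), d/dcols (x).
    gy, gx = np.gradient(feature_map.astype(float))
    pts = points.astype(float).copy()
    for _ in range(steps):
        prev = np.roll(pts, 1, axis=0)   # previous neighbor on the closed contour
        nxt = np.roll(pts, -1, axis=0)   # next neighbor
        # Internal force: discrete elasticity, pulls each point toward
        # the midpoint of its neighbors (smooths and shrinks the contour).
        elastic = prev + nxt - 2.0 * pts
        # External force: sample the feature gradient at each (clipped)
        # point location, moving points toward higher feature values.
        ix = np.clip(np.round(pts[:, 0]).astype(int), 0, feature_map.shape[1] - 1)
        iy = np.clip(np.round(pts[:, 1]).astype(int), 0, feature_map.shape[0] - 1)
        external = np.stack([gx[iy, ix], gy[iy, ix]], axis=1)
        pts += alpha * elastic + gamma * external
    return pts
```

A typical usage pattern is to initialize the contour as a circle around a region of interest and let it contract onto the object; the paper's contribution, by contrast, lies in driving this kind of evolution with learned deep features rather than raw image gradients.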

