SegT: A Novel Separated Edge-guidance Transformer Network for Polyp Segmentation (2306.10773v1)
Abstract: Accurate segmentation of colonoscopic polyps is a fundamental step in medical image analysis and surgical interventions. Many recent studies have built on the encoder-decoder framework, which can effectively segment diverse polyps; these improvements mainly enhance local features with global context and attention mechanisms. However, relying only on the global information from the final encoder block can discard local regional features produced in the intermediate layers. In addition, delineating the edges between benign regions and polyps is challenging. To address these issues, we propose a novel separated edge-guidance transformer (SegT) network that aims to build an effective polyp segmentation model. Specifically, we apply a transformer encoder that learns a more robust representation than existing CNN-based approaches. To achieve precise polyp segmentation, we use a separated edge-guidance module consisting of separator and edge-guidance blocks. The separator block is a two-stream operator that highlights edges between background and foreground, while the edge-guidance block follows both streams to strengthen edge understanding. Finally, a cascade fusion module fuses the refined multi-level features. To evaluate the effectiveness of SegT, we conducted experiments on five challenging public datasets, where the proposed model achieved state-of-the-art performance.
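To make the separated edge-guidance idea concrete, below is a minimal PyTorch sketch of a two-stream separator followed by an edge-guidance fusion step, as the abstract describes. This is an illustration under stated assumptions, not the authors' reference implementation: the module names (`SeparatorBlock`, `EdgeGuidance`), the channel widths, and the use of a coarse single-channel prediction map as the foreground/background separation signal are all assumptions introduced here for clarity.

```python
# Hedged sketch of the separated edge-guidance mechanism. All names and
# design details below are illustrative assumptions, not the paper's code.
import torch
import torch.nn as nn

class SeparatorBlock(nn.Module):
    """Two-stream operator: a foreground stream weighted by a coarse
    prediction map and a background stream weighted by its reverse."""
    def __init__(self, channels: int):
        super().__init__()
        self.fg_conv = nn.Conv2d(channels, channels, 3, padding=1)
        self.bg_conv = nn.Conv2d(channels, channels, 3, padding=1)

    def forward(self, feat, coarse_map):
        attn = torch.sigmoid(coarse_map)           # foreground attention in [0, 1]
        fg = self.fg_conv(feat * attn)             # foreground stream
        bg = self.bg_conv(feat * (1.0 - attn))     # background (reverse) stream
        return fg, bg

class EdgeGuidance(nn.Module):
    """Fuses both streams with an explicit edge feature so the boundary
    between polyp and background is emphasized in the refined feature."""
    def __init__(self, channels: int):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Conv2d(channels * 3, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, fg, bg, edge_feat):
        return self.fuse(torch.cat([fg, bg, edge_feat], dim=1))

# Usage: refine a 64-channel encoder feature with a 1-channel coarse map.
feat = torch.randn(1, 64, 44, 44)
coarse = torch.randn(1, 1, 44, 44)
edge = torch.randn(1, 64, 44, 44)
fg, bg = SeparatorBlock(64)(feat, coarse)
refined = EdgeGuidance(64)(fg, bg, edge)
print(refined.shape)  # torch.Size([1, 64, 44, 44])
```

The background stream here follows the reverse-attention pattern (weighting features by one minus the foreground attention), a common way to make a decoder attend to the uncertain region around object boundaries; whether SegT realizes its separator exactly this way is an assumption of this sketch.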