Swin transformers are robust to distribution and concept drift in endoscopy-based longitudinal rectal cancer assessment (2405.03762v4)
Abstract: Endoscopic images are used at various stages of rectal cancer treatment, from screening and diagnosis, through treatment to assess response and treatment-related toxicity such as colitis, to follow-up to detect new tumors or local regrowth (LR). However, subjective assessment is highly variable: it can underestimate the degree of response in some patients, subjecting them to unnecessary surgery, or overestimate response, placing patients at risk of disease spread. Advances in deep learning have shown the ability to produce consistent and objective response assessment from endoscopic images. However, methods for detecting cancers and regrowth and for monitoring response over the entire course of treatment and follow-up are lacking. This is because automated diagnosis and rectal cancer response assessment require methods that are robust to the inherent illumination variations and confounding conditions (blood, scope, blurring) present in endoscopy images, as well as to changes in the normal lumen and tumor during treatment. Hence, a hierarchical shifted window (Swin) transformer was trained to distinguish rectal cancer from normal lumen using endoscopy images. Swin, two convolutional models (ResNet-50, WideResNet-50), and a vision transformer (ViT) were trained and evaluated on follow-up longitudinal images to detect LR on a private dataset, and on out-of-distribution (OOD) public colonoscopy datasets to detect pre/non-cancerous polyps. Color shifts were applied using optimal transport to simulate distribution shifts. Swin and ResNet models were similarly accurate on the in-distribution dataset. Swin was more accurate than the other methods (follow-up: 0.84, OOD: 0.83) even when subject to color shifts (follow-up: 0.83, OOD: 0.87), indicating its capability to provide robust performance for longitudinal cancer assessment.
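As a rough illustration of the pipeline the abstract describes, the sketch below sets up an ImageNet-pretrained Swin-T classifier for tumor vs. normal lumen and applies an optimal-transport color shift between two endoscopy frames. This is a minimal sketch under stated assumptions, not the authors' implementation: the Swin variant, the use of torchvision and the POT (Python Optimal Transport) library, the pixel subsampling, and the Sinkhorn regularization value are all illustrative choices, and the paper's color-shift procedure may differ (it cites convolutional Wasserstein distances).

```python
# Minimal sketch (assumed libraries: torchvision, POT), not the paper's released code.
import numpy as np
import torch.nn as nn
from torchvision import models
import ot  # POT: Python Optimal Transport

# (1) ImageNet-pretrained Swin-T backbone with a two-class head
#     (tumor vs. normal lumen); the exact Swin variant is an assumption.
model = models.swin_t(weights=models.Swin_T_Weights.IMAGENET1K_V1)
model.head = nn.Linear(model.head.in_features, 2)

# (2) OT-based color shift: map the RGB palette of a source image onto
#     the palette of a reference (target-domain) image.
def ot_color_shift(src_img, ref_img, n_samples=1000, seed=0):
    """src_img, ref_img: float arrays in [0, 1] with shape (H, W, 3)."""
    rng = np.random.default_rng(seed)
    src_px = src_img.reshape(-1, 3)
    ref_px = ref_img.reshape(-1, 3)
    # Subsample pixels so the transport problem stays small.
    Xs = src_px[rng.choice(len(src_px), n_samples, replace=False)]
    Xt = ref_px[rng.choice(len(ref_px), n_samples, replace=False)]
    # Entropically regularized OT between the two pixel clouds.
    mapping = ot.da.SinkhornTransport(reg_e=1e-1)
    mapping.fit(Xs=Xs, Xt=Xt)
    # Apply the learned barycentric mapping to every source pixel.
    shifted = mapping.transform(Xs=src_px)
    return np.clip(shifted.reshape(src_img.shape), 0.0, 1.0)
```

In this sketch, one would fine-tune `model` on labeled endoscopy frames and then evaluate it on `ot_color_shift`-perturbed copies of the follow-up and OOD images to probe robustness to distribution shift.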