
Out-of-distribution Detection in Medical Image Analysis: A survey (2404.18279v2)

Published 28 Apr 2024 in cs.CV

Abstract: Computer-aided diagnostics has benefited from the development of deep-learning-based computer vision techniques in recent years. Traditional supervised deep learning methods assume that the test sample is drawn from the same distribution as the training data. However, out-of-distribution samples can be encountered in real-world clinical scenarios, and they may cause silent failures in deep-learning-based medical image analysis tasks. Recently, research has explored various out-of-distribution (OOD) detection settings and techniques to enable trustworthy medical AI systems. In this survey, we systematically review recent advances in OOD detection in medical image analysis. We first examine several factors that may cause distributional shift when a deep-learning-based model is used in clinical scenarios, and define three types of distributional shift on top of these factors. We then propose a framework to categorize and characterize existing solutions, and review previous studies according to this methodology taxonomy. Our discussion also covers evaluation protocols and metrics, as well as remaining challenges and under-explored research directions.
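To make the family of methods the survey covers concrete, the sketch below illustrates one common post-hoc, confidence-based OOD detection baseline (the maximum softmax probability score) together with two standard evaluation metrics, AUROC and FPR@95%TPR. This is a minimal illustration under stated assumptions, not code from the paper: the synthetic logits, class count, and threshold choice are placeholders standing in for the outputs of a trained medical image classifier.

```python
# Minimal sketch (assumptions, not from the paper): score test samples with the
# maximum softmax probability (MSP) and evaluate OOD detection with AUROC and
# FPR@95%TPR. Real usage would replace the synthetic logits with classifier outputs.
import numpy as np
from sklearn.metrics import roc_auc_score


def softmax(logits: np.ndarray) -> np.ndarray:
    """Row-wise softmax with a max-shift for numerical stability."""
    z = logits - logits.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)


def msp_score(logits: np.ndarray) -> np.ndarray:
    """Higher score = more in-distribution (maximum softmax probability)."""
    return softmax(logits).max(axis=1)


def fpr_at_95_tpr(scores_id: np.ndarray, scores_ood: np.ndarray) -> float:
    """False-positive rate on OOD samples at the threshold that keeps 95% of ID samples."""
    threshold = np.percentile(scores_id, 5)  # 95% of ID scores lie above this value
    return float((scores_ood >= threshold).mean())


def evaluate_ood(logits_id: np.ndarray, logits_ood: np.ndarray) -> dict:
    s_id, s_ood = msp_score(logits_id), msp_score(logits_ood)
    labels = np.concatenate([np.ones_like(s_id), np.zeros_like(s_ood)])  # 1 = ID
    scores = np.concatenate([s_id, s_ood])
    return {
        "auroc": roc_auc_score(labels, scores),
        "fpr@95tpr": fpr_at_95_tpr(s_id, s_ood),
    }


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Stand-in logits: ID samples are more peaked (confident) than OOD samples.
    logits_id = rng.normal(0, 1, (1000, 10))
    logits_id[np.arange(1000), rng.integers(0, 10, 1000)] += 4
    logits_ood = rng.normal(0, 1, (1000, 10))
    print(evaluate_ood(logits_id, logits_ood))
```

Most post-hoc detectors reviewed in this line of work follow the same pattern: compute a scalar score per test image from the model's outputs or internal features, then rank or threshold it against in-distribution scores.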
