
Tackling the Incomplete Annotation Issue in Universal Lesion Detection Task By Exploratory Training (2309.13306v1)

Published 23 Sep 2023 in cs.CV

Abstract: Universal lesion detection (ULD) has great value for clinical practice, as it aims to detect various types of lesions in multiple organs on medical images. Deep learning methods have shown promising results, but demand large volumes of annotated data for training. However, annotating medical images is costly and requires specialized knowledge. The diverse forms and contrasts of objects in medical images make full annotation even more challenging, resulting in incomplete annotations. Directly training ULD detectors on such datasets can yield suboptimal results. Pseudo-label-based methods, which examine the training data and mine unlabelled objects for retraining, have been shown to be effective in tackling this issue. Presently, top-performing methods rely on a dynamic label-mining mechanism operating at the mini-batch level. However, the model's performance varies across iterations, leading to inconsistencies in the quality of the mined labels and limiting the achievable improvement. Inspired by the observation that deep models learn concepts of increasing complexity, we introduce an exploratory training scheme that assesses the reliability of mined lesions over time. Specifically, we adopt a teacher-student detection model as the basis, where the teacher's predictions are combined with the incomplete annotations to train the student. Additionally, we design a prediction bank to record high-confidence predictions. Each sample is trained several times, yielding a sequence of records for each sample. If a prediction consistently appears in the record sequence, it is likely a true object; otherwise, it may just be noise. This serves as a crucial criterion for selecting reliable mined lesions for retraining. Our experimental results substantiate that the proposed framework surpasses state-of-the-art methods on two medical image datasets, demonstrating its superior performance.
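The prediction-bank criterion described above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the class name, the score/IoU thresholds, and the `min_hits` consistency rule are all assumptions made for the sketch. The idea is that each training pass records the teacher's high-confidence boxes per sample, and a mined box is kept for retraining only if it reappears (by IoU match) across enough passes.

```python
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union > 0 else 0.0


class PredictionBank:
    """Hypothetical sketch: record high-confidence teacher predictions
    per sample across training passes, and keep only boxes that appear
    consistently in the record sequence."""

    def __init__(self, score_thr=0.7, iou_thr=0.5, min_hits=3):
        self.records = {}            # sample_id -> list of per-pass box lists
        self.score_thr = score_thr   # confidence cut-off for recording
        self.iou_thr = iou_thr       # match threshold across passes
        self.min_hits = min_hits     # passes a box must appear in

    def record(self, sample_id, boxes_scores):
        """Store this pass's confident boxes for a sample."""
        kept = [box for box, score in boxes_scores if score >= self.score_thr]
        self.records.setdefault(sample_id, []).append(kept)

    def reliable_boxes(self, sample_id):
        """Return boxes from the latest pass that recur often enough in
        the sample's record sequence to be treated as true lesions."""
        history = self.records.get(sample_id, [])
        if not history:
            return []
        reliable = []
        for box in history[-1]:
            hits = sum(
                any(iou(box, prev) >= self.iou_thr for prev in record)
                for record in history
            )
            if hits >= self.min_hits:
                reliable.append(box)
        return reliable
```

With `min_hits=3`, a box that the teacher predicts confidently in three separate passes over a sample is promoted to a mined pseudo-label, while a box that appears only once is treated as noise and discarded; this mirrors the paper's criterion of selecting predictions that persist over time rather than trusting any single iteration.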

