
When Eye-Tracking Meets Machine Learning: A Systematic Review on Applications in Medical Image Analysis (2403.07834v1)

Published 12 Mar 2024 in eess.IV and cs.CV

Abstract: Eye-gaze tracking research offers significant promise in enhancing various healthcare-related tasks, above all in medical image analysis and interpretation. Eye tracking, a technology that monitors and records the movement of the eyes, provides valuable insights into human visual attention patterns. This technology can transform how healthcare professionals and medical specialists engage with and analyze diagnostic images, offering a more insightful and efficient approach to medical diagnostics. Hence, extracting meaningful features and insights from medical images by leveraging eye-gaze data improves our understanding of how radiologists and other medical experts monitor, interpret, and understand images for diagnostic purposes. Eye-tracking data, with intricate human visual attention patterns embedded, provides a bridge to integrating AI development and human cognition. This integration allows novel methods to incorporate domain knowledge into ML and deep learning (DL) approaches to enhance their alignment with human-like perception and decision-making. Moreover, extensive collections of eye-tracking data have also enabled novel ML/DL methods to analyze human visual patterns, paving the way to a better understanding of human vision, attention, and cognition. This systematic review investigates eye-gaze tracking applications and methodologies for enhancing ML/DL algorithms for medical image analysis in depth.
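The abstract describes using eye-gaze data as a bridge between human attention and ML/DL models, for instance as a supervision signal for model attention. A common first step in such pipelines is rendering raw fixations into a dense attention map. The sketch below is illustrative only (function and parameter names are assumptions, not from the paper): each fixation, given as an (x, y, dwell-time) triple, contributes a duration-weighted Gaussian, and the result is normalized for use as a soft target.

```python
import numpy as np

def gaze_heatmap(fixations, shape, sigma=10.0):
    """Render fixations [(x, y, duration), ...] as a normalized attention map.

    Illustrative sketch of how reviewed approaches commonly convert raw
    eye-tracking data into a supervision signal: each fixation adds a
    Gaussian centered at its location, weighted by dwell time.
    """
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]          # pixel coordinate grids
    heat = np.zeros(shape, dtype=float)
    for x, y, dur in fixations:
        heat += dur * np.exp(-((xs - x) ** 2 + (ys - y) ** 2) / (2 * sigma ** 2))
    if heat.max() > 0:
        heat /= heat.max()               # scale to [0, 1] for use as a soft target
    return heat

# Example: two fixations on a 64x64 image patch, the first with longer dwell time
fixations = [(20, 20, 0.8), (45, 40, 0.3)]
hm = gaze_heatmap(fixations, (64, 64), sigma=5.0)
```

A map like this can then serve, for example, as a target for an attention-consistency loss alongside the classification objective, which is one of the integration strategies the review surveys.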

Authors (6)
  1. Sahar Moradizeyveh
  2. Mehnaz Tabassum
  3. Sidong Liu
  4. Robert Ahadizad Newport
  5. Amin Beheshti
  6. Antonio Di Ieva
Citations (1)