
RAD-DINO: Exploring Scalable Medical Image Encoders Beyond Text Supervision (2401.10815v1)

Published 19 Jan 2024 in cs.CV

Abstract: Language-supervised pre-training has proven to be a valuable method for extracting semantically meaningful features from images, serving as a foundational element in multimodal systems within the computer vision and medical imaging domains. However, resulting features are limited by the information contained within the text. This is particularly problematic in medical imaging, where radiologists' written findings focus on specific observations; a challenge compounded by the scarcity of paired imaging-text data due to concerns over leakage of personal health information. In this work, we fundamentally challenge the prevailing reliance on language supervision for learning general purpose biomedical imaging encoders. We introduce RAD-DINO, a biomedical image encoder pre-trained solely on unimodal biomedical imaging data that obtains similar or greater performance than state-of-the-art biomedical language supervised models on a diverse range of benchmarks. Specifically, the quality of learned representations is evaluated on standard imaging tasks (classification and semantic segmentation), and a vision-language alignment task (text report generation from images). To further demonstrate the drawback of language supervision, we show that features from RAD-DINO correlate with other medical records (e.g., sex or age) better than language-supervised models, which are generally not mentioned in radiology reports. Finally, we conduct a series of ablations determining the factors in RAD-DINO's performance; notably, we observe that RAD-DINO's downstream performance scales well with the quantity and diversity of training data, demonstrating that image-only supervision is a scalable approach for training a foundational biomedical image encoder.

Introduction to RAD-DINO

Deep learning for medical imaging commonly relies on language-supervised pre-training, in which paired text teaches a model how to interpret and classify images. While this approach has seen considerable success, it runs into trouble when detailed textual data is unavailable or when personal health information must be protected. Here, the authors introduce and evaluate RAD-DINO, a new biomedical image encoder that breaks from this norm by pre-training solely on unimodal biomedical imaging data.
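
As context for what RAD-DINO moves away from: the standard language-supervised (CLIP-style) objective pulls matched image/text embedding pairs together in a shared space via a symmetric contrastive loss. A minimal NumPy sketch of that loss (the function names here are ours, for illustration only, not code from the paper):

```python
import numpy as np

def clip_contrastive_loss(img_emb, txt_emb, temperature=0.07):
    """Symmetric InfoNCE loss over a batch of paired image/text embeddings.

    img_emb, txt_emb: (batch, dim) arrays; row i of each is a matched pair.
    """
    # L2-normalise so the dot product is a cosine similarity.
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    txt = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)

    logits = img @ txt.T / temperature  # (batch, batch) similarity matrix
    labels = np.arange(len(logits))     # matched pair sits on the diagonal

    def cross_entropy(lg):
        lg = lg - lg.max(axis=1, keepdims=True)  # numerical stability
        log_probs = lg - np.log(np.exp(lg).sum(axis=1, keepdims=True))
        return -log_probs[np.arange(len(lg)), labels].mean()

    # Average the image-to-text and text-to-image directions.
    return 0.5 * (cross_entropy(logits) + cross_entropy(logits.T))
```

The key limitation the paper highlights follows directly from this objective: the image encoder is only pushed to represent whatever the paired text mentions.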

Beyond Text Supervision

RAD-DINO challenges the traditional reliance on language supervision in the biomedical imaging domain: medical images are used to train the model without any accompanying text. Across a range of medical imaging tasks, including classification, semantic segmentation, and vision–language alignment (generating text reports from images), RAD-DINO performed on par with or better than existing language-supervised models.
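
Representation quality on classification tasks of this kind is commonly measured by linear probing: the encoder stays frozen and only a linear head is fit on its features. A small self-contained sketch of that protocol (a hypothetical helper, not the paper's evaluation code):

```python
import numpy as np

def linear_probe(train_feats, train_labels, test_feats, n_classes, reg=1e-3):
    """Fit a ridge-regression classifier on frozen encoder features.

    A common proxy for representation quality: the encoder is fixed and
    only this linear head is trained on its output features.
    """
    X = np.hstack([train_feats, np.ones((len(train_feats), 1))])  # bias column
    Y = np.eye(n_classes)[train_labels]                           # one-hot targets
    # Closed-form ridge solution: W = (X^T X + reg * I)^-1 X^T Y
    W = np.linalg.solve(X.T @ X + reg * np.eye(X.shape[1]), X.T @ Y)
    Xt = np.hstack([test_feats, np.ones((len(test_feats), 1))])
    return (Xt @ W).argmax(axis=1)                                # predicted classes
```

Because the head is linear, any accuracy it reaches must already be present in the frozen features, which is what makes this a fair comparison between encoders.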

Notably, RAD-DINO's features also correlated better with additional medical records, such as patient sex and age, that are generally not mentioned in radiology reports. This suggests that RAD-DINO captures a broader, more holistic view of the clinical image than its text-supervised counterparts.
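
One simple way to quantify such a correlation, assuming access to frozen encoder features and a scalar attribute such as age, is to fit a linear readout and measure agreement on held-out patients. A hedged NumPy sketch (names and protocol are ours, not the paper's):

```python
import numpy as np

def metadata_correlation(train_feats, train_target, test_feats, test_target):
    """Fit a linear readout of frozen features to a scalar attribute (e.g. age)
    and return the Pearson correlation on held-out examples."""
    Xtr = np.hstack([train_feats, np.ones((len(train_feats), 1))])  # bias column
    w, *_ = np.linalg.lstsq(Xtr, train_target, rcond=None)          # least squares
    Xte = np.hstack([test_feats, np.ones((len(test_feats), 1))])
    pred = Xte @ w
    return np.corrcoef(pred, test_target)[0, 1]
```

A high correlation means the attribute is linearly decodable from the features, even though no text ever described it during pre-training.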

A Deeper Analysis

The researchers conducted comprehensive ablation studies to determine the factors behind RAD-DINO's performance, examining how the encoder responds to choices such as initialising from weights pre-trained on general-domain datasets, the masked image modelling objective, and image resolution.

Their results established that domain transfer from general image datasets laid a solid foundation for RAD-DINO's success, and that masked image modelling is particularly significant for image segmentation. They also showed that downstream performance scales with the quantity and diversity of domain-specific training data, underscoring the value of high-quality medical imaging corpora.
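
Masked image modelling, one of the factors examined in the ablations, trains the model to fill in hidden parts of the image, with the loss scored only at the masked positions. A simplified NumPy illustration of that idea (a sketch of the general technique, not the paper's actual objective):

```python
import numpy as np

def masked_patch_loss(patches, reconstruct, mask_ratio=0.4, seed=0):
    """Masked image modelling objective on a set of flattened image patches.

    patches: (n_patches, patch_dim) array.
    reconstruct: callable mapping corrupted patches back to patch space.
    Returns mean-squared error computed on the masked positions only.
    """
    rng = np.random.default_rng(seed)
    n = len(patches)
    n_masked = max(1, int(mask_ratio * n))
    masked_idx = rng.choice(n, size=n_masked, replace=False)

    corrupted = patches.copy()
    corrupted[masked_idx] = 0.0        # hide the selected patches
    recon = reconstruct(corrupted)
    # Score the model only where it had to fill in missing content.
    return np.mean((recon[masked_idx] - patches[masked_idx]) ** 2)
```

Because the loss only covers masked patches, the model must learn spatial context to predict missing regions, which is a plausible reason this objective helps dense tasks like segmentation.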

Benchmarking RAD-DINO

RAD-DINO's effectiveness was benchmarked against a series of state-of-the-art models across multiple medical datasets, from image classification to the more demanding task of generating text reports from medical images, and it held its own throughout. It particularly excelled at correlating with patient metadata such as age and sex, which is typically not detailed in text reports. This marks a step towards AI systems that generalise better across real-world medical imaging applications.

Conclusion and Future Implications

The findings suggest a shift in how foundational biomedical image encoders can be trained. By leveraging large amounts of imaging data while bypassing the restrictions of language supervision, RAD-DINO opens up medical AI applications that are more versatile, scalable, and potentially better attuned to the nuanced needs of healthcare diagnostics. The paper makes a compelling case for the AI community to explore self-supervised learning further, particularly in the crucial field of medical imaging.

Authors (15)
  1. Fernando Pérez-García
  2. Harshita Sharma
  3. Sam Bond-Taylor
  4. Kenza Bouzid
  5. Valentina Salvatelli
  6. Maximilian Ilse
  7. Shruthi Bannur
  8. Daniel C. Castro
  9. Anton Schwaighofer
  10. Matthew P. Lungren
  11. Maria Wetscherek
  12. Noel Codella
  13. Stephanie L. Hyland
  14. Javier Alvarez-Valle
  15. Ozan Oktay