
Multimodal Data Integration for Oncology in the Era of Deep Neural Networks: A Review (2303.06471v3)

Published 11 Mar 2023 in cs.LG

Abstract: Cancer has relational information residing at varying scales, modalities, and resolutions of the acquired data, such as radiology, pathology, genomics, proteomics, and clinical records. Integrating diverse data types can improve the accuracy and reliability of cancer diagnosis and treatment. There can be disease-related information that is too subtle for humans or existing technological tools to discern visually. Traditional methods typically focus on partial or unimodal information about biological systems at individual scales and fail to encapsulate the complete spectrum of the heterogeneous nature of data. Deep neural networks have facilitated the development of sophisticated multimodal data fusion approaches that can extract and integrate relevant information from multiple sources. Recent deep learning frameworks such as Graph Neural Networks (GNNs) and Transformers have shown remarkable success in multimodal learning. This review article provides an in-depth analysis of the state-of-the-art in GNNs and Transformers for multimodal data fusion in oncology settings, highlighting notable research studies and their findings. We also discuss the foundations of multimodal learning, inherent challenges, and opportunities for integrative learning in oncology. By examining the current state and potential future developments of multimodal data integration in oncology, we aim to demonstrate the promising role that multimodal neural networks can play in cancer prevention, early detection, and treatment through informed oncology practices in personalized settings.
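To make the kind of multimodal fusion surveyed here concrete, the following is a minimal, illustrative PyTorch sketch of cross-attention (co-attention) fusion between an imaging modality and a genomics modality. It is not the architecture of any study cited in the review; the module names, feature dimensions, and mean-pooling choices are hypothetical placeholders chosen only to show the general pattern of projecting each modality into a shared space, letting one modality attend to the other, and classifying from the fused representation.

# Hedged sketch of cross-attention multimodal fusion; dimensions and pooling are illustrative assumptions.
import torch
import torch.nn as nn

class MultimodalFusion(nn.Module):
    def __init__(self, img_dim=512, omics_dim=200, hidden=128, n_classes=2):
        super().__init__()
        # Project each modality into a shared embedding space.
        self.img_proj = nn.Linear(img_dim, hidden)
        self.omics_proj = nn.Linear(omics_dim, hidden)
        # Cross-attention: imaging tokens (queries) attend to omics tokens (keys/values).
        self.cross_attn = nn.MultiheadAttention(hidden, num_heads=4, batch_first=True)
        # Classification head over the concatenated pooled representations.
        self.head = nn.Linear(2 * hidden, n_classes)

    def forward(self, img_feats, omics_feats):
        # img_feats: (batch, n_patches, img_dim), e.g., patch embeddings from a whole-slide image
        # omics_feats: (batch, n_groups, omics_dim), e.g., pathway-grouped expression features
        q = self.img_proj(img_feats)
        kv = self.omics_proj(omics_feats)
        fused, _ = self.cross_attn(q, kv, kv)                    # (batch, n_patches, hidden)
        pooled = torch.cat([fused.mean(dim=1), kv.mean(dim=1)], dim=-1)
        return self.head(pooled)

# Example usage with random tensors standing in for real patient data.
model = MultimodalFusion()
logits = model(torch.randn(4, 16, 512), torch.randn(4, 50, 200))
print(logits.shape)  # torch.Size([4, 2])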

Authors (5)
  1. Asim Waqas (7 papers)
  2. Aakash Tripathi (8 papers)
  3. Ravi P. Ramachandran (9 papers)
  4. Paul Stewart (10 papers)
  5. Ghulam Rasool (32 papers)
Citations (16)

