Knowledge Graphs Meet Multi-Modal Learning: A Comprehensive Survey (2402.05391v4)
Abstract: Knowledge Graphs (KGs) play a pivotal role in advancing various AI applications, with the semantic web community's exploration into multi-modal dimensions unlocking new avenues for innovation. In this survey, we carefully review over 300 articles, focusing on KG-aware research in two principal aspects: KG-driven Multi-Modal (KG4MM) learning, where KGs support multi-modal tasks, and Multi-Modal Knowledge Graph (MM4KG), which extends KG studies into the MMKG realm. We begin by defining KGs and MMKGs, then explore their construction progress. Our review includes two primary task categories: KG-aware multi-modal learning tasks, such as Image Classification and Visual Question Answering, and intrinsic MMKG tasks like Multi-modal Knowledge Graph Completion and Entity Alignment, highlighting specific research trajectories. For most of these tasks, we provide definitions and evaluation benchmarks, and outline essential insights for conducting relevant research. Finally, we discuss current challenges and identify emerging trends, such as progress in Large Language Modeling and Multi-modal Pre-training strategies. This survey aims to serve as a comprehensive reference for researchers already involved in or considering delving into KG and multi-modal learning research, offering insights into the evolving landscape of MMKG research and supporting future work.
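To make the data setting behind these tasks concrete, the following is a minimal, hypothetical sketch of how an MMKG might be represented: symbolic (head, relation, tail) triples plus entity-level visual and textual attributes. The names, fields, and query shown are illustrative assumptions, not the survey's own formalism or any particular benchmark's schema.

```python
from dataclasses import dataclass, field

@dataclass
class Entity:
    name: str
    images: list[str] = field(default_factory=list)  # visual modality (e.g., image file paths)
    description: str = ""                            # textual modality

# Structured (symbolic) knowledge: a set of (head, relation, tail) triples.
triples = [
    ("Mona_Lisa", "created_by", "Leonardo_da_Vinci"),
    ("Mona_Lisa", "located_in", "Louvre"),
]

# Multi-modal attributes attached to entities (hypothetical examples).
entities = {
    "Mona_Lisa": Entity("Mona Lisa", images=["mona_lisa.jpg"]),
    "Leonardo_da_Vinci": Entity("Leonardo da Vinci", images=["da_vinci_portrait.jpg"]),
    "Louvre": Entity("Louvre", images=["louvre_facade.jpg"]),
}

# In this framing, Multi-modal Knowledge Graph Completion amounts to scoring
# candidate tails for a query like (Mona_Lisa, located_in, ?), drawing on both
# the triple structure and the entities' visual/textual attributes.
query = ("Mona_Lisa", "located_in", None)
```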
- Zhuo Chen
- Yichi Zhang
- Yin Fang
- Yuxia Geng
- Lingbing Guo
- Xiang Chen
- Qian Li
- Wen Zhang
- Jiaoyan Chen
- Yushan Zhu
- Jiaqi Li
- Xiaoze Liu
- Jeff Z. Pan
- Ningyu Zhang
- Huajun Chen