
Rethinking Uncertainly Missing and Ambiguous Visual Modality in Multi-Modal Entity Alignment (2307.16210v2)

Published 30 Jul 2023 in cs.AI, cs.CV, cs.LG, and cs.MM

Abstract: As a crucial extension of entity alignment (EA), multi-modal entity alignment (MMEA) aims to identify identical entities across disparate knowledge graphs (KGs) by exploiting associated visual information. However, existing MMEA approaches primarily concentrate on the fusion paradigm of multi-modal entity features, while neglecting the challenges posed by the pervasive absence and intrinsic ambiguity of visual images. In this paper, we present a further analysis of visual modality incompleteness, benchmarking the latest MMEA models on our proposed dataset MMEA-UMVM, in which the aligned KGs cover bilingual and monolingual settings, evaluated under both standard (non-iterative) and iterative training paradigms. Our research indicates that, in the face of modality incompleteness, models overfit the modality noise and exhibit performance oscillations or declines at high rates of missing modality. This shows that including additional multi-modal data can sometimes adversely affect EA. To address these challenges, we introduce UMAEA, a robust multi-modal entity alignment approach designed to tackle uncertainly missing and ambiguous visual modalities. It consistently achieves SOTA performance across all 97 benchmark splits, significantly surpassing existing baselines with limited parameters and time consumption, while effectively alleviating the identified limitations of other models. Our code and benchmark data are available at https://github.com/zjukg/UMAEA.
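The abstract describes evaluating MMEA models under controlled rates of missing visual modality (the MMEA-UMVM splits). The sketch below is a minimal, hypothetical illustration of how such incompleteness could be simulated by randomly dropping entity images and substituting a learnable placeholder embedding; the class and parameter names are assumptions for illustration, not part of the released UMAEA code.

```python
# Illustrative sketch (not the UMAEA codebase): simulating an "uncertainly missing"
# visual modality by dropping entity images at a chosen missing rate and falling
# back to a shared learnable placeholder. All names here are hypothetical.
import torch
import torch.nn as nn


class VisualModalityDropout(nn.Module):
    """Replaces visual embeddings of randomly selected entities with a shared
    learnable placeholder, mimicking KGs where many entities lack images."""

    def __init__(self, dim: int, missing_rate: float = 0.4):
        super().__init__()
        self.missing_rate = missing_rate
        self.placeholder = nn.Parameter(torch.zeros(dim))  # stand-in for absent images

    def forward(self, img_emb: torch.Tensor):
        # img_emb: (num_entities, dim) visual features, e.g. from a frozen image encoder
        mask = torch.rand(img_emb.size(0), device=img_emb.device) < self.missing_rate
        out = img_emb.clone()
        out[mask] = self.placeholder
        return out, mask  # mask marks entities whose visual modality is "missing"


if __name__ == "__main__":
    emb = torch.randn(1000, 128)                       # toy visual embeddings
    dropper = VisualModalityDropout(dim=128, missing_rate=0.6)
    noisy_emb, missing = dropper(emb)
    print(f"{missing.float().mean():.2%} of entities had their images dropped")
```

Sweeping `missing_rate` over a benchmark in this fashion is one simple way to reproduce the kind of high-missing-rate regimes under which the paper reports performance oscillations or declines.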

Authors (9)
  1. Zhuo Chen (319 papers)
  2. Lingbing Guo (27 papers)
  3. Yin Fang (32 papers)
  4. Yichi Zhang (184 papers)
  5. Jiaoyan Chen (85 papers)
  6. Jeff Z. Pan (78 papers)
  7. Yangning Li (49 papers)
  8. Huajun Chen (198 papers)
  9. Wen Zhang (170 papers)
Citations (19)