
MEAformer: Multi-modal Entity Alignment Transformer for Meta Modality Hybrid (2212.14454v4)

Published 29 Dec 2022 in cs.AI and cs.CL

Abstract: Multi-modal entity alignment (MMEA) aims to discover identical entities across different knowledge graphs (KGs) whose entities are associated with relevant images. However, current MMEA algorithms rely on KG-level modality fusion strategies for multi-modal entity representation, which ignore the variations in modality preferences across entities and thus compromise robustness against noise in modalities such as blurry images and relations. This paper introduces MEAformer, a multi-modal entity alignment transformer approach for meta modality hybrid, which dynamically predicts the mutual correlation coefficients among modalities for more fine-grained entity-level modality fusion and alignment. Experimental results demonstrate that our model not only achieves SOTA performance in multiple training scenarios, including supervised, unsupervised, iterative, and low-resource settings, but also has a limited number of parameters, efficient runtime, and interpretability. Our code is available at https://github.com/zjukg/MEAformer.
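The core idea of entity-level modality fusion described above can be illustrated with a minimal sketch: for each entity, per-modality embeddings are scored against one another and combined with softmax weights, so different entities can favor different modalities. This is an assumption-laden toy stand-in (the function name, the similarity-based scoring, and the use of NumPy are illustrative), not the paper's actual transformer, which predicts the correlation coefficients with cross-modal attention.

```python
import numpy as np

def entity_level_fusion(modal_embs, temperature=1.0):
    """Fuse one entity's per-modality embeddings with softmax weights.

    modal_embs: dict mapping modality name -> (d,) embedding vector.
    Returns (fused_vector, weights) where weights sum to 1.

    Note: scoring each modality by its total similarity to the others is a
    simplified stand-in for MEAformer's predicted correlation coefficients.
    """
    names = sorted(modal_embs)
    X = np.stack([modal_embs[n] for n in names])   # (m, d) modality matrix
    sim = X @ X.T                                  # (m, m) pairwise similarities
    scores = sim.sum(axis=1) / temperature         # one score per modality
    w = np.exp(scores - scores.max())              # numerically stable softmax
    w /= w.sum()
    fused = w @ X                                  # (d,) weighted combination
    return fused, dict(zip(names, w))

# Example: an entity with a sharp graph/image signal and a noisy attribute view.
embs = {
    "graph": np.array([1.0, 0.1]),
    "image": np.array([0.9, 0.2]),
    "attr":  np.array([0.1, 1.0]),
}
fused, weights = entity_level_fusion(embs)
```

Because the weights are computed per entity, an entity with a blurry image would downweight the visual modality while another entity keeps it dominant, which is the robustness property the abstract emphasizes.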

Authors (11)
  1. Zhuo Chen (319 papers)
  2. Jiaoyan Chen (85 papers)
  3. Wen Zhang (170 papers)
  4. Lingbing Guo (27 papers)
  5. Yin Fang (32 papers)
  6. Yufeng Huang (14 papers)
  7. Yichi Zhang (184 papers)
  8. Yuxia Geng (22 papers)
  9. Jeff Z. Pan (78 papers)
  10. Wenting Song (3 papers)
  11. Huajun Chen (198 papers)
Citations (40)