xGQA: Cross-Lingual Visual Question Answering (2109.06082v2)

Published 13 Sep 2021 in cs.CL

Abstract: Recent advances in multimodal vision and language modeling have predominantly focused on the English language, mostly due to the lack of multilingual multimodal datasets to steer modeling efforts. In this work, we address this gap and provide xGQA, a new multilingual evaluation benchmark for the visual question answering task. We extend the established English GQA dataset to 7 typologically diverse languages, enabling us to detect and explore crucial challenges in cross-lingual visual question answering. We further propose new adapter-based approaches to adapt multimodal transformer-based models to become multilingual, and -- vice versa -- multilingual models to become multimodal. Our proposed methods outperform current state-of-the-art multilingual multimodal models (e.g., M3P) in zero-shot cross-lingual settings, but the accuracy remains low across the board; a performance drop of around 38 accuracy points in target languages showcases the difficulty of zero-shot cross-lingual transfer for this task. Our results suggest that simple cross-lingual transfer of multimodal models yields latent multilingual multimodal misalignment, calling for more sophisticated methods for vision and multilingual language modeling.
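
The adapter-based approach mentioned in the abstract typically inserts small bottleneck modules into a pretrained transformer and trains only those modules, leaving the backbone frozen. The sketch below is a minimal, hypothetical illustration of such an adapter; the hidden and bottleneck dimensions and the placement are assumptions for illustration, not the paper's exact configuration.

```python
# Illustrative sketch (not the authors' exact code): a bottleneck adapter of the
# kind used to make a multimodal transformer multilingual (or a multilingual
# transformer multimodal) by training only small inserted modules.
import torch
import torch.nn as nn

class BottleneckAdapter(nn.Module):
    """Down-project -> nonlinearity -> up-project, plus a residual connection."""
    def __init__(self, hidden_size: int = 768, bottleneck_size: int = 96):
        super().__init__()
        self.down = nn.Linear(hidden_size, bottleneck_size)
        self.up = nn.Linear(bottleneck_size, hidden_size)
        self.act = nn.GELU()

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        return hidden_states + self.up(self.act(self.down(hidden_states)))

# During cross-lingual adaptation, the backbone transformer stays frozen and
# only the adapters (e.g., one per layer) are updated on target-language data.
adapter = BottleneckAdapter()
x = torch.randn(2, 16, 768)   # (batch, sequence length, hidden size)
print(adapter(x).shape)       # torch.Size([2, 16, 768])
```

In practice the adapter parameters are a small fraction of the full model, which is what makes this kind of language or modality adaptation cheap relative to full fine-tuning.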

Authors (7)
  1. Jonas Pfeiffer
  2. Gregor Geigle
  3. Aishwarya Kamath
  4. Jan-Martin O. Steitz
  5. Stefan Roth
  6. Ivan Vulić
  7. Iryna Gurevych
Citations (47)