EVJVQA Challenge: Multilingual Visual Question Answering (2302.11752v5)

Published 23 Feb 2023 in cs.CL

Abstract: Visual Question Answering (VQA) is a challenging task at the intersection of NLP and computer vision (CV) that has attracted significant attention from researchers. English is a resource-rich language with extensive datasets and models for visual question answering, but resources and models for other languages remain underdeveloped. Moreover, no existing multilingual dataset targets the visual content of a particular country, with its own objects and cultural characteristics. To address this gap, we provide the research community with a benchmark dataset named EVJVQA, comprising 33,000+ question-answer pairs in three languages (Vietnamese, English, and Japanese) over approximately 5,000 images taken in Vietnam, for evaluating multilingual VQA systems. EVJVQA served as the benchmark for the multilingual visual question answering challenge at the 9th Workshop on Vietnamese Language and Speech Processing (VLSP 2022), which attracted 62 participating teams from various universities and organizations. In this article, we describe the organization of the challenge, give an overview of the methods employed by the shared-task participants, and report the results. The highest scores on the private test set are 0.4392 in F1-score and 0.4009 in BLEU. The multilingual QA systems proposed by the top two teams use ViT as the pre-trained vision model and mT5, a multilingual language model based on the Transformer architecture, as the pre-trained language model. EVJVQA is a challenging dataset that motivates NLP and CV researchers to further explore multilingual models and systems for visual question answering. We released the challenge on the CodaLab evaluation system for further research.
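
The abstract notes that the top two teams paired a ViT image encoder with mT5 as the language model. As an illustration only, here is a minimal sketch of one plausible way to wire those two pretrained components together with Hugging Face transformers: ViT patch features are concatenated with the mT5-encoded question and handed to the mT5 decoder as generic encoder outputs. The specific checkpoints, the concatenation-based fusion, and the `answer` helper are assumptions for this sketch, not the participants' actual implementations.

```python
# Hypothetical ViT + mT5 VQA sketch (NOT the winning teams' code).
# Assumes google/vit-base-patch16-224-in21k and google/mt5-base, whose
# hidden sizes are both 768, so their features can be concatenated directly.
import torch
from transformers import (AutoImageProcessor, AutoTokenizer,
                          MT5ForConditionalGeneration, ViTModel)
from transformers.modeling_outputs import BaseModelOutput

processor = AutoImageProcessor.from_pretrained("google/vit-base-patch16-224-in21k")
vit = ViTModel.from_pretrained("google/vit-base-patch16-224-in21k")
tokenizer = AutoTokenizer.from_pretrained("google/mt5-base")
mt5 = MT5ForConditionalGeneration.from_pretrained("google/mt5-base")

@torch.no_grad()
def answer(image, question: str) -> str:
    """Generate an answer for a PIL image and a question string."""
    # Encode the image into a sequence of patch embeddings: (1, 197, 768).
    pixel_values = processor(images=image, return_tensors="pt").pixel_values
    image_feats = vit(pixel_values=pixel_values).last_hidden_state
    # Encode the (Vietnamese/English/Japanese) question with the mT5 encoder.
    question_ids = tokenizer(question, return_tensors="pt").input_ids
    text_feats = mt5.encoder(input_ids=question_ids).last_hidden_state
    # Naive fusion: concatenate along the sequence axis and let the mT5
    # decoder cross-attend to the joint sequence as if it were its own
    # encoder's output.
    fused = torch.cat([image_feats, text_feats], dim=1)
    output_ids = mt5.generate(
        encoder_outputs=BaseModelOutput(last_hidden_state=fused),
        max_new_tokens=32,
    )
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)
```

Without fine-tuning on EVJVQA-style (image, question, answer) triples, this pipeline would of course produce near-random text; the sketch only shows where a pretrained vision encoder and a pretrained multilingual text generator can meet in such an architecture.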

Authors (5)
  1. Ngan Luu-Thuy Nguyen (56 papers)
  2. Nghia Hieu Nguyen (10 papers)
  3. Khanh Quoc Tran (4 papers)
  4. Kiet Van Nguyen (74 papers)
  5. Duong T. D. Vo (1 paper)
Citations (5)