e-ViL: A Dataset and Benchmark for Natural Language Explanations in Vision-Language Tasks (2105.03761v2)

Published 8 May 2021 in cs.CV, cs.CL, and cs.LG

Abstract: Recently, there has been an increasing number of efforts to introduce models capable of generating natural language explanations (NLEs) for their predictions on vision-language (VL) tasks. Such models are appealing because they can provide human-friendly and comprehensive explanations. However, there is a lack of comparison between existing methods, due to a lack of re-usable evaluation frameworks and a scarcity of datasets. In this work, we introduce e-ViL and e-SNLI-VE. e-ViL is a benchmark for explainable vision-language tasks that establishes a unified evaluation framework and provides the first comprehensive comparison of existing approaches that generate NLEs for VL tasks. It spans four models and three datasets; both automatic metrics and human evaluation are used to assess model-generated explanations. e-SNLI-VE is currently the largest existing VL dataset with NLEs (over 430k instances). We also propose a new model that combines UNITER, which learns joint embeddings of images and text, and GPT-2, a pre-trained language model that is well-suited for text generation. It surpasses the previous state of the art by a large margin across all datasets. Code and data are available here: https://github.com/maximek3/e-ViL.
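
The abstract's main modelling claim is an architecture that couples a vision-language encoder (UNITER) with GPT-2 as an explanation decoder. The authors' actual implementation is in the linked repository; the snippet below is only a minimal sketch of the general idea, assuming Hugging Face's GPT-2 and a hypothetical `vl_embeddings` tensor standing in for UNITER's joint image-text representations.

```python
# Hedged sketch (not the authors' exact e-UG code): condition GPT-2 on multimodal
# embeddings, such as those UNITER would produce, to train it to generate an NLE.
import torch
import torch.nn as nn
from transformers import GPT2LMHeadModel


class ExplanationGenerator(nn.Module):
    def __init__(self, vl_hidden_size: int = 768):
        super().__init__()
        self.gpt2 = GPT2LMHeadModel.from_pretrained("gpt2")
        # Project the vision-language embeddings into GPT-2's embedding space.
        self.proj = nn.Linear(vl_hidden_size, self.gpt2.config.n_embd)

    def forward(self, vl_embeddings: torch.Tensor, explanation_ids: torch.Tensor):
        # vl_embeddings: (batch, num_vl_tokens, vl_hidden_size), e.g. UNITER outputs.
        # explanation_ids: (batch, expl_len) token ids of the reference explanation.
        prefix = self.proj(vl_embeddings)                         # (B, T_vl, n_embd)
        expl_embeds = self.gpt2.transformer.wte(explanation_ids)  # (B, T_expl, n_embd)
        inputs_embeds = torch.cat([prefix, expl_embeds], dim=1)

        # Compute the LM loss only on explanation tokens; -100 masks the prefix.
        ignore = torch.full(prefix.shape[:2], -100, dtype=torch.long,
                            device=explanation_ids.device)
        labels = torch.cat([ignore, explanation_ids], dim=1)
        return self.gpt2(inputs_embeds=inputs_embeds, labels=labels).loss
```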

Authors (7)
  1. Maxime Kayser (5 papers)
  2. Oana-Maria Camburu (29 papers)
  3. Leonard Salewski (7 papers)
  4. Cornelius Emde (7 papers)
  5. Virginie Do (13 papers)
  6. Zeynep Akata (144 papers)
  7. Thomas Lukasiewicz (125 papers)
Citations (93)