The Impact of Explanations on AI Competency Prediction in VQA (2007.00900v1)

Published 2 Jul 2020 in cs.CV, cs.AI, and cs.HC

Abstract: Explainability is one of the key elements for building trust in AI systems. Among numerous attempts to make AI explainable, quantifying the effect of explanations remains a challenge in human-AI collaborative tasks. Aside from the ability to predict the overall behavior of AI, in many applications users need to understand an AI agent's competency in different aspects of the task domain. In this paper, we evaluate the impact of explanations on the user's mental model of AI agent competency within the task of visual question answering (VQA). We quantify users' understanding of competency based on the correlation between the actual system performance and user rankings. We introduce an explainable VQA system that uses spatial and object features and is powered by the BERT language model. Each group of users sees only one kind of explanation to rank the competencies of the VQA model. The proposed model is evaluated through between-subjects experiments to probe the impact of explanations on the user's perception of competency. The comparison between two VQA models shows that BERT-based explanations and the use of object features improve the user's prediction of the model's competencies.
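
The abstract does not specify which correlation statistic relates actual system performance to user rankings; the sketch below is only one plausible reading, assuming a Spearman rank correlation between per-category model accuracy and the mean competency rank a user group assigns. All category names and numbers are hypothetical.

```python
from scipy.stats import spearmanr

# Hypothetical VQA question categories used to probe competency.
categories = ["counting", "color", "spatial", "object", "activity"]

# Actual per-category accuracy of the VQA model (hypothetical values).
model_accuracy = [0.41, 0.78, 0.55, 0.83, 0.62]

# Mean competency rank assigned by one user group after seeing a single
# explanation type (1 = judged most competent). Because rank 1 should map
# to the highest accuracy, a well-calibrated mental model yields a
# Spearman rho close to -1 between accuracy and rank.
user_rank = [5, 2, 4, 1, 3]

rho, p_value = spearmanr(model_accuracy, user_rank)
print(f"Spearman rho = {rho:.2f} (p = {p_value:.3f})")
```

Under this reading, comparing rho across explanation conditions (e.g., BERT-based vs. spatial-only) would quantify how much each explanation type improves users' competency predictions.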

Authors (6)
  1. Kamran Alipour
  2. Arijit Ray
  3. Xiao Lin
  4. Yi Yao
  5. Giedrius T. Burachas
  6. Jurgen P. Schulze
Citations (9)