XEQ Scale for Evaluating XAI Experience Quality Grounded in Psychometric Theory (2407.10662v3)

Published 15 Jul 2024 in cs.AI and cs.HC

Abstract: Explainable Artificial Intelligence (XAI) aims to improve the transparency of autonomous decision-making through explanations. Recent literature has emphasised users' need for holistic "multi-shot" explanations and the ability to personalise their engagement with XAI systems. We refer to this user-centred interaction as an XAI Experience. Despite advances in creating XAI experiences, evaluating them in a user-centred manner has remained challenging. To address this, we introduce the XAI Experience Quality (XEQ) Scale (pronounced "Seek" Scale) for evaluating the user-centred quality of XAI experiences. The XEQ Scale quantifies the quality of experiences across four evaluation dimensions: learning, utility, fulfilment and engagement. These contributions extend the state-of-the-art of XAI evaluation, moving beyond the one-dimensional metrics frequently developed to assess single-shot explanations. In this paper, we present the XEQ Scale development and validation process, including content validation with XAI experts as well as discriminant and construct validation through a large-scale pilot study. Our pilot study results offer strong evidence establishing the XEQ Scale as a comprehensive framework for evaluating user-centred XAI experiences.
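To make the four-dimension structure concrete, here is a minimal sketch of how Likert-style item responses could be aggregated into per-dimension scores. The item names, the item-to-dimension mapping, and the 1-5 response range are illustrative assumptions for this sketch, not the published XEQ instrument.

```python
from statistics import mean

# Hypothetical item-to-dimension mapping; the real XEQ Scale's items and
# groupings are defined in the paper, not reproduced here.
DIMENSIONS = {
    "learning": ["item_1", "item_2"],
    "utility": ["item_3", "item_4"],
    "fulfilment": ["item_5", "item_6"],
    "engagement": ["item_7", "item_8"],
}

def score_xeq(responses: dict[str, int]) -> dict[str, float]:
    """Average the 1-5 item responses within each evaluation dimension."""
    return {
        dim: mean(responses[item] for item in items)
        for dim, items in DIMENSIONS.items()
    }

# Example: one participant's (fabricated) responses to the eight items.
responses = {f"item_{n}": r for n, r in enumerate([5, 4, 4, 3, 5, 5, 2, 3], start=1)}
scores = score_xeq(responses)
# e.g. scores["learning"] is the mean of item_1 and item_2
```

A real scoring pipeline would also handle reverse-coded items and missing responses; this sketch only shows the dimension-averaging idea.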

Authors (7)
  1. Anjana Wijekoon
  2. Nirmalie Wiratunga
  3. David Corsar
  4. Kyle Martin
  5. Ikechukwu Nkisi-Orji
  6. Belen Díaz-Agudo
  7. Derek Bridge
Citations (2)