Evaluating Explanations Through LLMs: Beyond Traditional User Studies (2410.17781v1)

Published 23 Oct 2024 in cs.AI

Abstract: As AI becomes fundamental in sectors like healthcare, explainable AI (XAI) tools are essential for trust and transparency. However, traditional user studies used to evaluate these tools are often costly, time-consuming, and difficult to scale. In this paper, we explore the use of LLMs to replicate human participants to help streamline XAI evaluation. We reproduce a user study comparing counterfactual and causal explanations, replicating human participants with seven LLMs under various settings. Our results show that (i) LLMs can replicate most conclusions from the original study, (ii) different LLMs yield varying levels of alignment in the results, and (iii) experimental factors such as LLM memory and output variability affect alignment with human responses. These initial findings suggest that LLMs could provide a scalable and cost-effective way to simplify qualitative XAI evaluation.
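The central setup the abstract describes is prompting an LLM to answer the same questionnaire a human participant would see after reading an explanation. The snippet below is a minimal illustrative sketch of that idea, not the paper's actual protocol: the model name, persona prompt, example explanation, and rating question are all placeholders, and it assumes the OpenAI Python SDK is available with an API key configured.

```python
# Minimal sketch: querying an LLM as a simulated user-study participant.
# Prompt wording, model, and rating scale are illustrative placeholders,
# not the protocol used in the paper.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

PARTICIPANT_ROLE = (
    "You are a participant in a user study on AI explanations. "
    "Answer each question as an average non-expert user would."
)

explanation = (
    "Counterfactual explanation: 'Your loan was denied. If your annual "
    "income had been $5,000 higher, it would have been approved.'"
)

question = (
    "On a scale from 1 (not at all) to 5 (completely), how much does this "
    "explanation help you understand the decision? Reply with a single number."
)

response = client.chat.completions.create(
    model="gpt-4o",      # placeholder; the study compares seven different LLMs
    temperature=1.0,     # output variability is one of the factors the paper examines
    messages=[
        {"role": "system", "content": PARTICIPANT_ROLE},
        {"role": "user", "content": f"{explanation}\n\n{question}"},
    ],
)

rating = response.choices[0].message.content.strip()
print(f"Simulated participant rating: {rating}")
```

Repeating such a query across models, explanation types, and sampling settings (and comparing the resulting response distributions with the original human data) is the kind of replication the abstract refers to.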

Authors (5)
  1. Francesco Bombassei De Bona (1 paper)
  2. Gabriele Dominici (10 papers)
  3. Tim Miller (53 papers)
  4. Marc Langheinrich (9 papers)
  5. Martin Gjoreski (8 papers)
Citations (1)