How to Answer Why -- Evaluating the Explanations of AI Through Mental Model Analysis (2002.02526v1)

Published Feb 2020 in cs.HC and cs.AI

Abstract: To achieve optimal human-system integration in the context of user-AI interaction, it is important that users develop a valid representation of how the AI works. In most everyday interactions with technical systems, users construct mental models (i.e., an abstraction of the anticipated mechanisms a system uses to perform a given task). If no explicit explanations are provided by the system (e.g., by a self-explaining AI) or by other sources (e.g., an instructor), the mental model is typically formed from experience, i.e., the user's observations during the interaction. The congruence of this mental model with the actual system's functioning is vital, as the model is used for assumptions, predictions, and consequently for decisions regarding system use. A key question for human-centered AI research is therefore how to validly survey users' mental models. The objective of the present research is to identify suitable elicitation methods for mental model analysis. We evaluated whether mental models are suitable as an empirical research method. Additionally, methods of cognitive tutoring are integrated. We propose an exemplary method for evaluating explainable AI approaches in a human-centered way.
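The abstract's central quantity, the congruence between a user's mental model and the system's actual functioning, can be operationalized as prediction agreement: users predict the system's output on a set of probe tasks, and congruence is the share of correct predictions. The sketch below is a minimal illustration of that idea and is not taken from the paper; `ProbeTrial`, `user_prediction`, and `system_output` are hypothetical names introduced here.

```python
# Illustrative sketch (not from the paper): operationalize mental-model
# congruence as prediction agreement -- the fraction of probe tasks for
# which a user correctly predicts the system's output.

from dataclasses import dataclass


@dataclass
class ProbeTrial:
    """One elicitation trial (all field names are hypothetical)."""
    task: str
    user_prediction: str   # what the user expects the AI to do
    system_output: str     # what the AI actually did


def congruence_score(trials: list[ProbeTrial]) -> float:
    """Return the share of trials where prediction and behavior agree (0.0-1.0)."""
    if not trials:
        raise ValueError("need at least one probe trial")
    hits = sum(t.user_prediction == t.system_output for t in trials)
    return hits / len(trials)


# Example: a user predicts a spam filter's decisions on three probe emails.
trials = [
    ProbeTrial("email with suspicious link", "spam", "spam"),
    ProbeTrial("newsletter from known sender", "not spam", "not spam"),
    ProbeTrial("invoice from new contact", "spam", "not spam"),
]
print(f"Congruence: {congruence_score(trials):.2f}")  # -> Congruence: 0.67
```

Richer elicitation methods (e.g., drawing tasks or teach-back interviews) would replace the single string comparison with a structured similarity measure, but the underlying agreement idea stays the same.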

Authors (2)
  1. Tim Schrills (2 papers)
  2. Thomas Franke (6 papers)
Citations (4)
