Performance and Metacognition Disconnect when Reasoning in Human-AI Interaction (2409.16708v2)
Abstract: Optimizing human-AI interaction requires users to reflect critically on their own performance. Our paper examines whether people using AI to complete tasks can accurately monitor how well they perform. In Study 1, participants (N = 246) used AI to solve 20 logical reasoning problems from the Law School Admission Test. While their task performance improved by three points relative to a norm population, participants overestimated their performance by four points. Interestingly, higher AI literacy was linked to less accurate self-assessment: participants with more technical knowledge of AI were more confident but less precise in judging their own performance. Using a computational model, we explored individual differences in metacognitive accuracy and found that the Dunning-Kruger effect, usually observed in this task, disappeared when participants worked with AI. Study 2 (N = 452) replicates these findings. We discuss how AI levels metacognitive performance and consider the consequences of performance overestimation for interactive AI systems designed to enhance cognition.