
To Trust or to Think: Cognitive Forcing Functions Can Reduce Overreliance on AI in AI-assisted Decision-making (2102.09692v1)

Published 19 Feb 2021 in cs.HC and cs.AI

Abstract: People supported by AI-powered decision support tools frequently overrely on the AI: they accept an AI's suggestion even when that suggestion is wrong. Adding explanations to the AI decisions does not appear to reduce the overreliance and some studies suggest that it might even increase it. Informed by the dual-process theory of cognition, we posit that people rarely engage analytically with each individual AI recommendation and explanation, and instead develop general heuristics about whether and when to follow the AI suggestions. Building on prior research on medical decision-making, we designed three cognitive forcing interventions to compel people to engage more thoughtfully with the AI-generated explanations. We conducted an experiment (N=199), in which we compared our three cognitive forcing designs to two simple explainable AI approaches and to a no-AI baseline. The results demonstrate that cognitive forcing significantly reduced overreliance compared to the simple explainable AI approaches. However, there was a trade-off: people assigned the least favorable subjective ratings to the designs that reduced the overreliance the most. To audit our work for intervention-generated inequalities, we investigated whether our interventions benefited equally people with different levels of Need for Cognition (i.e., motivation to engage in effortful mental activities). Our results show that, on average, cognitive forcing interventions benefited participants higher in Need for Cognition more. Our research suggests that human cognitive motivation moderates the effectiveness of explainable AI solutions.

Cognitive Forcing Functions in AI-assisted Decision-Making

The paper "To Trust or to Think: Cognitive Forcing Functions Can Reduce Overreliance on AI in AI-assisted Decision-making" investigates the challenges associated with overreliance on AI in decision-making, particularly in scenarios where human operators tend to accept AI-generated suggestions without critical evaluation, even when such suggestions are incorrect. The research introduces cognitive forcing functions as a potential solution to mitigate this problem by encouraging more analytical engagement with AI-provided information.

Study Overview and Methodology

The paper is grounded in the dual-process theory of cognition, which distinguishes between heuristic, fast-thinking processes (System 1) and more deliberate, analytical processes (System 2). The authors posit that AI explanations often fail to engage System 2 thinking among users, leading to persistent overreliance on AI systems. To address this, the researchers designed three cognitive forcing interventions aimed at enhancing user engagement with AI explanations:

  • On Demand: A configuration where AI suggestions are shown only when the user actively requests them.
  • Update: Users must make an initial decision before receiving AI recommendations, which they can then update if desired.
  • Wait: A delay is introduced before AI suggestions and explanations are offered, prompting users to consider their responses in the interim.

The researchers conducted an experiment with 199 participants, comparing the three cognitive forcing designs against two simple explainable AI approaches and a no-AI baseline.

Key Findings

  1. Reduced Overreliance: Cognitive forcing functions significantly reduced overreliance on incorrect AI predictions compared to the simple explainable AI approaches. Overreliance was not eliminated entirely, however: participants still occasionally followed incorrect AI suggestions even under these conditions.
  2. Effectiveness vs. Usability Trade-off: The designs that reduced overreliance the most received the least favorable subjective ratings from participants, suggesting an inherent trade-off between the cognitive demand these interventions impose and their effectiveness in encouraging analytical engagement.
  3. Impact of Need for Cognition (NFC): The benefits of cognitive forcing interventions were more pronounced among individuals high in NFC, a measure of one's intrinsic motivation to engage in effortful mental activities. These participants improved more under cognitive forcing than those low in NFC, indicating a potential disparity in benefits based on individual cognitive traits (see the analysis sketch below).

Implications and Future Directions

The findings of this paper have several theoretical and practical implications. Theoretically, they highlight a limitation of current explainable AI systems, which assume users will fully engage with explanations, and they underscore the role of cognitive motivation in human-AI interaction. Practically, the research suggests that AI systems should be designed not only to explain their recommendations but also to actively involve users in the decision-making process through cognitive forcing functions. Furthermore, the need to account for individual differences in cognitive motivation points to the potential of personalized AI interaction designs.

For future work, the paper points toward adaptive cognitive forcing strategies that balance the trade-off between effectiveness and usability. Such strategies could apply cognitive forcing selectively, based on situational need or user characteristics, dynamically adjusting the level of intervention to optimize both user engagement and decision quality. This approach could yield AI systems that support decision-making not only more effectively but also more inclusively, catering to users with varying levels of cognitive motivation.

Authors (3)
  1. Zana Buçinca
  2. Maja Barbara Malaya
  3. Krzysztof Z. Gajos
Citations (219)