Training Towards Critical Use: Learning to Situate AI Predictions Relative to Human Knowledge (2308.15700v1)

Published 30 Aug 2023 in cs.HC, cs.AI, and cs.LG

Abstract: A growing body of research has explored how to support humans in making better use of AI-based decision support, including via training and onboarding. Existing research has focused on decision-making tasks where it is possible to evaluate "appropriate reliance" by comparing each decision against a ground truth label that cleanly maps to both the AI's predictive target and the human decision-maker's goals. However, this assumption does not hold in many real-world settings where AI tools are deployed today (e.g., social work, criminal justice, and healthcare). In this paper, we introduce a process-oriented notion of appropriate reliance called critical use that centers the human's ability to situate AI predictions against knowledge that is uniquely available to them but unavailable to the AI model. To explore how training can support critical use, we conduct a randomized online experiment in a complex social decision-making setting: child maltreatment screening. We find that, when given accelerated, low-stakes opportunities to practice AI-assisted decision-making in this setting, novices came to exhibit patterns of disagreement with AI that resemble those of experienced workers. A qualitative examination of participants' explanations for their AI-assisted decisions revealed that they drew upon qualitative case narratives, to which the AI model did not have access, to learn when (not) to rely on AI predictions. Our findings open new questions for the study and design of training for real-world AI-assisted decision-making.
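
The abstract's central comparison, whether novices' patterns of disagreement with the AI come to resemble those of experienced workers, can be made concrete with a simple metric. Below is a minimal, hypothetical Python sketch of one such metric, a per-case disagreement rate between human decisions and AI recommendations. The paper does not publish code, so every function name and data value here is an illustrative assumption.

```python
# Illustrative sketch (not from the paper): quantifying "patterns of
# disagreement with AI" by comparing human decisions against AI
# recommendations on a shared set of cases. All data below is hypothetical.

from typing import Sequence

def disagreement_rate(human_decisions: Sequence[int],
                      ai_recommendations: Sequence[int]) -> float:
    """Fraction of cases where the human's decision differs from the AI's."""
    assert len(human_decisions) == len(ai_recommendations)
    disagreements = sum(h != a for h, a in zip(human_decisions, ai_recommendations))
    return disagreements / len(human_decisions)

# Hypothetical binary screening decisions (1 = screen in, 0 = screen out).
ai_recs     = [1, 1, 0, 1, 0, 1, 1, 0]
novice_pre  = [1, 1, 0, 1, 0, 1, 1, 0]   # near-total deference to the AI
novice_post = [1, 0, 0, 1, 1, 1, 0, 0]   # selective disagreement after practice
experienced = [1, 0, 0, 1, 1, 1, 0, 0]

print(f"novice (pre-practice):  {disagreement_rate(novice_pre, ai_recs):.2f}")
print(f"novice (post-practice): {disagreement_rate(novice_post, ai_recs):.2f}")
print(f"experienced workers:    {disagreement_rate(experienced, ai_recs):.2f}")
```

On this toy data, the post-practice novice's disagreement pattern matches the experienced workers' exactly, mirroring the qualitative finding the abstract reports; the paper's actual analysis is richer than a single rate.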

Authors (9)
  1. Anna Kawakami (11 papers)
  2. Luke Guerdan (9 papers)
  3. Yanghuidi Cheng (4 papers)
  4. Matthew Lee (18 papers)
  5. Scott Carter (10 papers)
  6. Nikos Arechiga (23 papers)
  7. Kate Glazko (4 papers)
  8. Haiyi Zhu (46 papers)
  9. Kenneth Holstein (37 papers)
Citations (6)