
Learning to Advise Humans in High-Stakes Settings (2210.12849v3)

Published 23 Oct 2022 in cs.HC, cs.AI, and cs.CY

Abstract: Expert decision-makers (DMs) in high-stakes AI-assisted decision-making (AIaDM) settings receive and reconcile recommendations from AI systems before making their final decisions. We identify distinct properties of these settings which are key to developing AIaDM models that effectively benefit team performance. First, DMs incur reconciliation costs from exerting decision-making resources (e.g., time and effort) when reconciling AI recommendations that contradict their own judgment. Second, DMs in AIaDM settings exhibit algorithm discretion behavior (ADB), i.e., an idiosyncratic tendency to imperfectly accept or reject algorithmic recommendations for any given decision task. The human's reconciliation costs and imperfect discretion behavior introduce the need to develop AI systems which (1) provide recommendations selectively, (2) leverage the human partner's ADB to maximize the team's decision accuracy while regularizing for reconciliation costs, and (3) are inherently interpretable. We refer to the task of developing AI to advise humans in AIaDM settings as learning to advise and we address this task by first introducing the AI-assisted Team (AIaT)-Learning Framework. We instantiate our framework to develop TeamRules (TR): an algorithm that produces rule-based models and recommendations for AIaDM settings. TR is optimized to selectively advise a human and to trade off reconciliation costs and team accuracy for a given environment by leveraging the human partner's ADB. Evaluations on synthetic and real-world benchmark datasets with a variety of simulated human accuracy and discretion behaviors show that TR robustly improves the team's objective across settings over interpretable, rule-based alternatives.
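
The abstract describes the team objective only at a high level: maximize the team's decision accuracy while regularizing for the reconciliation costs incurred when AI advice contradicts the human, with acceptance of that advice governed by the human's ADB. The sketch below simulates that kind of objective for a hypothetical human-AI team; the accuracy rates, the single acceptance-probability ADB model, the cost constant, and the exact form of the objective are illustrative assumptions, not the paper's formulation or the TeamRules algorithm itself.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative setup (assumed, not from the paper):
# n binary decision tasks; the AI either stays silent or issues a recommendation.
n = 1000
y = rng.integers(0, 2, n)                              # ground-truth labels
human_pred = np.where(rng.random(n) < 0.70, y, 1 - y)  # human correct ~70% of the time
ai_pred = np.where(rng.random(n) < 0.85, y, 1 - y)     # AI correct ~85% of the time
advise = rng.random(n) < 0.5                           # AI selectively advises on ~half the tasks

# Assumed ADB model: when the AI advises and its recommendation contradicts the
# human's own judgment, the human accepts the recommendation with probability p_accept.
p_accept = 0.6
contradicts = advise & (ai_pred != human_pred)
accepts = contradicts & (rng.random(n) < p_accept)

final_pred = np.where(accepts, ai_pred, human_pred)

# Assumed team objective: accuracy minus a reconciliation-cost penalty charged
# whenever the human must reconcile a contradicting recommendation.
recon_cost = 0.05
team_accuracy = (final_pred == y).mean()
objective = team_accuracy - recon_cost * contradicts.mean()

print(f"team accuracy = {team_accuracy:.3f}, objective = {objective:.3f}")
```

Under these assumptions, raising recon_cost pushes the best policy toward advising on fewer, higher-value tasks, which is the selectivity trade-off the abstract describes.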

Authors (3)
  1. Nicholas Wolczynski (3 papers)
  2. Maytal Saar-Tsechansky (15 papers)
  3. Tong Wang (144 papers)
