Evaluation of Human-AI Teams for Learned and Rule-Based Agents in Hanabi (2107.07630v3)

Published 15 Jul 2021 in cs.AI and cs.HC

Abstract: Deep reinforcement learning has generated superhuman AI in competitive games such as Go and StarCraft. Can similar learning techniques create a superior AI teammate for human-machine collaborative games? Will humans prefer AI teammates that improve objective team performance or those that improve subjective metrics of trust? In this study, we perform a single-blind evaluation of teams of humans and AI agents in the cooperative card game Hanabi, with both rule-based and learning-based agents. In addition to the game score, used as an objective metric of the human-AI team performance, we also quantify subjective measures of the human's perceived performance, teamwork, interpretability, trust, and overall preference for an AI teammate. We find that humans have a clear preference toward a rule-based AI teammate (SmartBot) over a state-of-the-art learning-based AI teammate (Other-Play) across nearly all subjective metrics, and generally view the learning-based agent negatively, despite no statistical difference in the game score. This result has implications for future AI design and reinforcement learning benchmarking, highlighting the need to incorporate subjective metrics of human-AI teaming rather than a singular focus on objective task performance.
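The abstract's central contrast rests on statistical comparisons between the two agent conditions: no detectable difference in the objective metric (game score) alongside clear differences in subjective ratings. As a minimal illustrative sketch of how such a comparison might be run (not the authors' analysis code; the test choice, variable names, and all data values are hypothetical):

```python
# Hypothetical sketch of comparing objective and subjective metrics between
# two AI teammate conditions (SmartBot vs. Other-Play). Data are invented
# for illustration; the paper's actual statistical procedure may differ.
from scipy.stats import mannwhitneyu

# Objective metric: Hanabi game scores (0-25) for teams with each agent.
smartbot_scores = [17, 19, 15, 20, 18, 16]
otherplay_scores = [16, 18, 17, 19, 15, 17]
stat, p_score = mannwhitneyu(smartbot_scores, otherplay_scores,
                             alternative="two-sided")
print(f"Game score: U={stat:.1f}, p={p_score:.3f}")  # likely non-significant

# Subjective metric: e.g., 7-point Likert trust ratings per participant.
smartbot_trust = [6, 5, 7, 6, 5, 6]
otherplay_trust = [3, 2, 4, 3, 2, 3]
stat, p_trust = mannwhitneyu(smartbot_trust, otherplay_trust,
                             alternative="two-sided")
print(f"Trust rating: U={stat:.1f}, p={p_trust:.3f}")  # preference for SmartBot
```

A nonparametric test like Mann-Whitney U is a natural fit here because Likert ratings are ordinal and game scores need not be normally distributed, though this is an assumption about the analysis rather than a claim about the paper's methods.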

Authors (8)
  1. Ho Chit Siu (11 papers)
  2. Jaime D. Pena (127 papers)
  3. Edenna Chen (1 paper)
  4. Yutai Zhou (5 papers)
  5. Victor J. Lopez (1 paper)
  6. Kyle Palko (2 papers)
  7. Kimberlee C. Chang (1 paper)
  8. Ross E. Allen (6 papers)
Citations (44)