Human-like adaptive group coordination and strategy use in LLMs

Determine whether large language models (LLMs) can demonstrate adaptive group coordination at a level comparable to humans, and whether the coordination strategies they employ align with those used by human groups.

Background

LLMs are increasingly deployed as agents in multi-agent systems intended to collaborate on tasks requiring coordinated action. While some studies report strengths in settings aligned with self-interest or supported by strong environmental structure, performance in pure coordination settings remains mixed.

This paper evaluates human and LLM groups on Group Binary Search, a common-interest game with imperfect monitoring, and finds systematic differences: humans benefit more from numerical feedback, improve across games, and stabilize their behavior, whereas LLMs exhibit overreactivity and persistent switching. These findings underscore the broader open question of whether LLMs can achieve human-comparable adaptive coordination and whether they rely on similar strategies.
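The exact rules of Group Binary Search are not given here, but the described dynamics (aggregate feedback under imperfect monitoring, and volatility measured as persistent switching) can be illustrated with a minimal sketch. The sketch below assumes a hypothetical variant in which each agent repeatedly picks 0 or 1, the group succeeds when the count of 1s matches a hidden target, and agents observe only coarse "too high / too low" feedback rather than each other's choices. The function names `simulate` and `switch_rate`, the overreactive policy, and all parameters are illustrative assumptions, not the paper's actual task or metrics.

```python
import random


def simulate(n_agents=5, target=2, rounds=30, switch_prob=0.8, seed=0):
    """Simulate a hypothetical Group Binary Search variant.

    Each round, every agent holds a binary action. The group receives
    only aggregate feedback (count of 1s too high or too low), modeling
    imperfect monitoring: no agent sees individual choices of others.
    Returns the list of joint action profiles, one per round.
    """
    rng = random.Random(seed)
    actions = [rng.randint(0, 1) for _ in range(n_agents)]
    history = [actions[:]]
    for _ in range(rounds):
        total = sum(actions)
        if total == target:
            break  # coordination achieved
        direction = -1 if total > target else 1
        for i in range(n_agents):
            # Overreactive policy (assumed): every agent whose switch would
            # move the count toward the target switches with high probability,
            # which tends to overshoot and produce persistent oscillation.
            eligible = (direction == -1 and actions[i] == 1) or (
                direction == 1 and actions[i] == 0
            )
            if eligible and rng.random() < switch_prob:
                actions[i] += direction
        history.append(actions[:])
    return history


def switch_rate(history):
    """Volatility metric: mean fraction of agents changing action per round."""
    if len(history) < 2:
        return 0.0
    n = len(history[0])
    switches = sum(
        sum(a != b for a, b in zip(prev, cur))
        for prev, cur in zip(history, history[1:])
    )
    return switches / (n * (len(history) - 1))
```

Under this toy policy, a high `switch_rate` over a long `history` corresponds to the overreactivity and persistent switching attributed to LLM groups, while a stabilizing group would show the rate decaying toward zero as actions settle.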

References

As LLMs become more capable, it remains an open question whether they can demonstrate comparable adaptive coordination and whether they use the same strategies as humans.

High Volatility and Action Bias Distinguish LLMs from Humans in Group Coordination (2604.02578 - Maini et al., 2 Apr 2026) in Abstract