
Human vs. Machine: Behavioral Differences Between Expert Humans and Language Models in Wargame Simulations (2403.03407v4)

Published 6 Mar 2024 in cs.CY, cs.AI, and cs.CL

Abstract: To some, the advent of AI promises better decision-making and increased military effectiveness while reducing the influence of human error and emotions. However, there is still debate about how AI systems, especially LLMs that can be applied to many tasks, behave compared to humans in high-stakes military decision-making scenarios with the potential for increased risks towards escalation. To test this potential and scrutinize the use of LLMs for such purposes, we use a new wargame experiment with 214 national security experts designed to examine crisis escalation in a fictional U.S.-China scenario and compare the behavior of human player teams to LLM-simulated team responses in separate simulations. Here, we find that the LLM-simulated responses can be more aggressive and significantly affected by changes in the scenario. We show a considerable high-level agreement in the LLM and human responses and significant quantitative and qualitative differences in individual actions and strategic tendencies. These differences depend on intrinsic biases in LLMs regarding the appropriate level of violence following strategic instructions, the choice of LLM, and whether the LLMs are tasked to decide for a team of players directly or first to simulate dialog between a team of players. When simulating the dialog, the discussions lack quality and maintain a farcical harmony. The LLM simulations cannot account for human player characteristics, showing no significant difference even for extreme traits, such as "pacifist" or "aggressive sociopath." When probing behavioral consistency across individual moves of the simulation, the tested LLMs deviated from each other but generally showed somewhat consistent behavior. Our results motivate policymakers to be cautious before granting autonomy or following AI-based strategy recommendations.


Summary

  • The paper compares expert human decision-making with LLM behaviors in simulated US-China crisis wargames.
  • The study uses dual-move experiments to expose both convergences and divergences in strategic decisions.
  • Findings indicate LLMs are instruction-sensitive and biased, urging caution before high-stakes deployment.

Analysis of "Human vs. Machine: LLMs and Wargames"

The paper "Human vs. Machine: LLMs and Wargames" by Lamparth et al. investigates the comparative behavior of human experts and LLMs in simulated military wargames. The paper situates itself at the intersection of artificial intelligence and international security, aiming to discern the viability of LLMs as substitutes for human decision-makers in high-stakes scenarios.

Context and Motivation

Wargames have historically played pivotal roles in shaping military strategies and decision-making processes. The rapid advancement of AI and LLMs has prompted discussions about their potential roles in strategic domains, including military applications. However, the debate persists on whether these models faithfully capture human decision-making nuances or could inadvertently escalate conflicts due to flawed interpretations.

Methodology

The authors conducted a wargame experiment set in a fictional 2026 US-China crisis involving AI-enabled weaponry. Human participants comprised 214 national security experts, while the LLM agents were two OpenAI models, GPT-3.5 and GPT-4. The wargame consisted of two moves, each requiring strategic decisions based on an evolving scenario.

The LLMs were tasked with simulating team discussions and decision-making. The paper assessed their responses for overlap with human decisions and examined systematic behavioral discrepancies. Various conditions, such as dialogue simulation length and player background information, were tested to evaluate model sensitivity.
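To make the setup concrete, the following minimal sketch (not the authors' code) shows how such a condition might be implemented with the OpenAI chat completions API: the model first extends a simulated team discussion for a configurable number of turns, then is asked to commit to a single action. The scenario text, action list, model name, and function names are illustrative assumptions.

```python
# Hypothetical sketch of an LLM-simulated wargame team: the prompt wording,
# scenario text, and action list are illustrative, not the paper's exact setup.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

SCENARIO = "Fictional 2026 US-China crisis involving AI-enabled weaponry."
ACTIONS = ["hold position", "defensive posture", "automatic firing", "de-escalate"]

def simulate_team_move(model: str, player_backgrounds: list[str], n_turns: int = 6) -> str:
    """Simulate a short team discussion, then return the team's chosen action."""
    messages = [{
        "role": "system",
        "content": (
            "You simulate a team of national security experts playing a crisis "
            f"wargame. Players: {', '.join(player_backgrounds)}. Scenario: {SCENARIO}"
        ),
    }]
    # Alternate dialogue turns; each turn extends the simulated discussion.
    for turn in range(n_turns):
        messages.append({"role": "user",
                         "content": f"Continue the team discussion (turn {turn + 1})."})
        reply = client.chat.completions.create(model=model, messages=messages)
        messages.append({"role": "assistant",
                         "content": reply.choices[0].message.content})
    # After the dialogue, force a single decision from the allowed action list.
    messages.append({"role": "user",
                     "content": f"Decide on exactly one action from: {ACTIONS}."})
    decision = client.chat.completions.create(model=model, messages=messages)
    return decision.choices[0].message.content.strip()
```

Varying the number of dialogue turns, the strategic instructions in the system prompt, or the player background descriptions corresponds to the sensitivity conditions described above.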

Findings

  1. Alignment and Discrepancies: The paper identified significant general behavioral overlap between LLM simulations and human participants: on half of the evaluated actions, the two concurred. Despite this overlap, substantial discrepancies emerged in strategic preferences, with GPT-3.5 leaning towards more aggressive tactics such as automatic firing and GPT-4 preferring defensive stances (a toy sketch of how such agreement rates might be computed follows this list).
  2. Instruction Sensitivity: Both LLMs demonstrated sensitivity to changes in instructions related to engagement priorities, albeit with distinct strategic biases: GPT-3.5 displayed a tendency towards violent actions, in contrast to GPT-4's inclination towards defensive measures.
  3. Dialog Simulation Quality: The simulated dialogue between LLM-generated participants lacked depth and disagreement, diverging from authentic human interactions. Both models produced rather harmonious exchanges, with GPT-4 showing slight improvements in discourse quality over GPT-3.5.
  4. Player Backgrounds: LLM simulations were largely insensitive to variations in player background information. Even when prompted with extreme character traits (e.g., "pacifists" or "aggressive sociopaths"), the models failed to reflect significant behavioral changes.
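As a toy illustration of the agreement finding in item 1 (not the paper's analysis code), the sketch below compares how often human teams and LLM-simulated teams select each available action; the action list and data values are made up for demonstration.

```python
# Toy agreement computation: compare per-action selection rates between
# human teams and LLM-simulated teams. All data here is fabricated for
# illustration and does not reproduce the paper's results.
from collections import Counter

ACTIONS = ["hold position", "defensive posture", "automatic firing", "de-escalate"]

def action_rates(choices: list[str]) -> dict[str, float]:
    """Fraction of teams selecting each action."""
    counts = Counter(choices)
    total = len(choices)
    return {a: counts[a] / total for a in ACTIONS}

def agreement(human: list[str], llm: list[str], threshold: float = 0.1) -> float:
    """Share of actions whose human and LLM selection rates differ by less than threshold."""
    h, m = action_rates(human), action_rates(llm)
    agree = sum(abs(h[a] - m[a]) < threshold for a in ACTIONS)
    return agree / len(ACTIONS)

# Made-up example data for illustration only.
human_choices = ["defensive posture", "hold position", "defensive posture", "de-escalate"]
llm_choices = ["automatic firing", "defensive posture", "defensive posture", "hold position"]
print(f"Approximate agreement: {agreement(human_choices, llm_choices):.0%}")
```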

Implications

These findings have critical implications for the deployment of LLMs in military and strategic contexts. While LLMs demonstrate potential in replicating human decision-making to an extent, their unpredictable deviations and sensitivity to instruction nuances underscore the need for caution. LLMs, in their current form, possess inherent biases influenced by their training data, leading to unreliable and inconsistent strategic behavior.

The insights suggest potential enhancements such as fine-tuning LLMs with domain-specific data, although this may not fully resolve unpredictability issues. Future iterations might benefit from more sophisticated dialogue simulation and more faithful modeling of player roles. Nevertheless, states should exercise diligence before integrating LLMs into crucial decision-making processes, given the current difficulty of ensuring consistent and safe AI behavior.

Moving forward, research should aim to establish such behavioral guarantees, focusing on formal verification methods that can scale to complex, general-purpose AI systems like LLMs. Without such assurances, reliance on LLMs for high-stakes decisions remains a speculative venture. In the interim, these models can assist with broader experimental or predictive exercises where simulations do not pose direct operational risks.

Conclusion

The paper by Lamparth et al. presents a comprehensive evaluation of LLMs in a strategic wargame context, highlighting both capabilities and cautionary limitations. As AI continues to evolve, ensuring reliability and predictability in high-stakes applications remains paramount.