Human vs. Machine: Behavioral Differences Between Expert Humans and Language Models in Wargame Simulations (2403.03407v4)
Abstract: To some, the advent of AI promises better decision-making and increased military effectiveness while reducing the influence of human error and emotion. However, it is still debated how AI systems, especially large language models (LLMs) that can be applied to many tasks, behave compared to humans in high-stakes military decision-making scenarios, and whether they increase the risk of escalation. To test this potential and scrutinize the use of LLMs for such purposes, we use a new wargame experiment with 214 national security experts designed to examine crisis escalation in a fictional U.S.-China scenario, and we compare the behavior of human player teams to LLM-simulated team responses in separate simulations. Here, we find that the LLM-simulated responses can be more aggressive and are significantly affected by changes in the scenario. We show considerable high-level agreement between the LLM and human responses, alongside significant quantitative and qualitative differences in individual actions and strategic tendencies. These differences depend on the LLMs' intrinsic biases regarding the appropriate level of violence when following strategic instructions, the choice of LLM, and whether the LLMs are tasked to decide for a team of players directly or to first simulate dialog among the team's players. When simulating dialog, the discussions are shallow and maintain a farcical harmony. The LLM simulations cannot account for human player characteristics, showing no significant difference even for extreme traits such as "pacifist" or "aggressive sociopath." When probing behavioral consistency across individual moves of the simulation, the tested LLMs deviated from each other but generally showed somewhat consistent behavior. Our results motivate policymakers to be cautious before granting AI systems autonomy or following AI-based strategy recommendations.
- G. Aher et al. Using large language models to simulate multiple humans and replicate human subject studies. In Proceedings of the 40th International Conference on Machine Learning, 2023.
- Y. Bai et al. Constitutional AI: Harmlessness from AI feedback. arXiv, 2212.08073, 2022.
- E. M. Bender et al. On the dangers of stochastic parrots: Can language models be too big? In Proceedings of the ACM Conference on Fairness, Accountability, and Transparency (FAccT '21), pages 610–623, 2021.
- S. Biddle. OpenAI quietly deletes ban on using ChatGPT for 'military and warfare'. The Intercept, 2024.
- N. Brown and T. Sandholm. Superhuman AI for heads-up no-limit poker: Libratus beats top professionals. Science, 359:418–424, 2018.
- N. Brown and T. Sandholm. Superhuman AI for multiplayer poker. Science, 365:885–890, 2019.
- S. Casper et al. Open problems and fundamental limitations of reinforcement learning from human feedback. arXiv, 2307.15217, 2023.
- D. Demszky et al. Using large language models in psychology. Nature Reviews Psychology, 2:688–701, 2023.
- D. Dillion et al. Can AI language models replace human participants? Trends in Cognitive Sciences, 27:597–600, 2023.
- F. Dorner et al. Do personality tests generalize to large language models? In Socially Responsible Language Modelling Research (SoLaR) Workshop at NeurIPS, 2023.
- J. R. Emery. Moral choices without moral language: 1950s political-military wargaming at the RAND Corporation. Texas National Security Review, Fall 2021.
- Meta Fundamental AI Research Diplomacy Team (FAIR) et al. Human-level play in the game of Diplomacy by combining language models with strategic reasoning. Science, 378:1067–1074, 2022.
- K. Gandhi et al. Strategic reasoning with language models. arXiv, 2305.19165, 2023.
- L. Griffin et al. Large language models respond to influence like humans. In Proceedings of the First Workshop on Social Influence in Conversations (SICon 2023), pages 15–24. Association for Computational Linguistics, 2023.
- I. Grossmann et al. AI and the transformation of social science research. Science, 380:1108–1109, 2023.
- J. Harding et al. AI language models cannot replace human research participants. AI & Society, 2023.
- W. Hoffman and H. M. Kim. Reducing the risks of artificial intelligence for military decision advantage. Center for Security and Emerging Technology, 2023. https://doi.org/10.51593/2021CA008.
- J. Jumper et al. Highly accurate protein structure prediction with AlphaFold. Nature, 596:583–589, 2021.
- E. Kaufmann et al. Champion-level drone racing using deep reinforcement learning. Nature, 620:982–987, 2023.
- E. Lin-Greenberg et al. Wargaming for international relations. European Journal of International Relations, 28(1):83–109, 2022.
- K. Manson. The US military is taking generative AI out for a spin. Bloomberg, 2023.
- V. Mnih et al. Human-level control through deep reinforcement learning. Nature, 518:529–533, 2015.
- OpenAI. GPT-4 technical report. https://cdn.openai.com/papers/gpt-4.pdf, 2023a.
- OpenAI. Models. https://platform.openai.com/docs/models/overview, 2023b.
- L. Ouyang et al. Training language models to follow instructions with human feedback. In Proceedings of the 36th Conference on Neural Information Processing Systems, 2022.
- R. Rafailov et al. Direct preference optimization: Your language model is secretly a reward model. In Proceedings of the 37th Conference on Neural Information Processing Systems, 2023.
- A. Reddie et al. Next generation wargames. Science, 362(6421):1362–1364, 2018.
- J. P. Rivera et al. Escalation risks from language models in military and diplomatic decision-making. arXiv, 2401.03408, 2024.
- S. Santurkar et al. Whose opinions do language models reflect? In Proceedings of the 40th International Conference on Machine Learning, 2023.
- M. Schmid et al. Student of Games: A unified learning algorithm for both perfect and imperfect information games. Science Advances, 9:eadg3256, 2023.
- J. Schneider. What wargames really reveal. Foreign Affairs, December 2023.
- P. Schoenegger et al. AI-augmented predictions: LLM assistants improve human forecasting accuracy. arXiv, 2402.07862, 2024.
- M. Shanahan et al. Role play with large language models. Nature, 623:493–498, 2023.
- D. Silver et al. Mastering the game of Go with deep neural networks and tree search. Nature, 529:484–489, 2016.
- D. Silver et al. A general reinforcement learning algorithm that masters chess, shogi, and Go through self-play. Science, 362:1140–1144, 2018.
- T. Trinh et al. Solving olympiad geometry without human demonstrations. Nature, 625:476–482, 2024.
- O. Vinyals et al. Grandmaster level in StarCraft II using multi-agent reinforcement learning. Nature, 575:350–354, 2019.