The Challenge of Using LLMs to Simulate Human Behavior: A Causal Inference Perspective (2312.15524v2)

Published 24 Dec 2023 in cs.AI, cs.IR, econ.EM, and stat.AP

Abstract: LLMs have shown impressive potential to simulate human behavior. We identify a fundamental challenge in using them to simulate experiments: when LLM-simulated subjects are blind to the experimental design (as is standard practice with human subjects), variations in treatment systematically affect unspecified variables that should remain constant, violating the unconfoundedness assumption. Using demand estimation as a context and an actual experiment as a benchmark, we show this can lead to implausible results. While confounding may in principle be addressed by controlling for covariates, this can compromise ecological validity in the context of LLM simulations: controlled covariates become artificially salient in the simulated decision process, which introduces focalism. This trade-off between unconfoundedness and ecological validity is usually absent in traditional experimental design and represents a unique challenge in LLM simulations. We formalize this challenge theoretically, showing it stems from ambiguous prompting strategies, and hence cannot be fully addressed by improving training data or by fine-tuning. Alternative approaches that unblind the experimental design to the LLM show promise. Our findings suggest that effectively leveraging LLMs for experimental simulations requires fundamentally rethinking established experimental design practices rather than simply adapting protocols developed for human subjects.
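The unconfoundedness assumption the abstract refers to is the standard ignorability condition from the potential-outcomes framework in causal inference. As a sketch, for a treatment $T$, potential outcomes $Y(t)$, and covariates $X$ (notation assumed here, not taken from the paper):

```latex
% Unconfoundedness (ignorability): conditional on covariates X,
% treatment assignment is independent of the potential outcomes.
\[
  \{Y(t)\}_{t \in \mathcal{T}} \;\perp\!\!\!\perp\; T \mid X
\]
```

The abstract's point is that in an LLM simulation, varying the treatment in the prompt (e.g., a price) can also shift unspecified variables that belong in $X$ (e.g., inferred product quality), so conditioning on the stated covariates alone no longer delivers this independence.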


