Large Language Models Show Human-like Social Desirability Biases in Survey Responses (2405.06058v2)

Published 9 May 2024 in cs.AI, cs.CL, cs.CY, and cs.HC

Abstract: As LLMs become widely used to model and simulate human behavior, understanding their biases becomes critical. We developed an experimental framework using Big Five personality surveys and uncovered a previously undetected social desirability bias in a wide range of LLMs. By systematically varying the number of questions LLMs were exposed to, we demonstrate their ability to infer when they are being evaluated. When personality evaluation is inferred, LLMs skew their scores towards the desirable ends of trait dimensions (i.e., increased extraversion, decreased neuroticism, etc.). This bias exists in all tested models, including GPT-4/3.5, Claude 3, Llama 3, and PaLM-2. Bias levels appear to increase in more recent models, with GPT-4's survey responses changing by 1.20 (human) standard deviations and Llama 3's by 0.98 standard deviations, which are very large effects. This bias is robust to randomization of question order and paraphrasing. Reverse-coding all the questions decreases bias levels but does not eliminate them, suggesting that this effect cannot be attributed to acquiescence bias. Our findings reveal an emergent social desirability bias and suggest constraints on profiling LLMs with psychometric tests and on using LLMs as proxies for human participants.
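As a rough illustration of the paradigm the abstract describes (not the authors' code), the sketch below administers Big Five-style items to a model in batches of varying size, reverse-codes negatively keyed items, and expresses the model's mean trait score in human standard-deviation units. The `query_llm` stub, the example item texts, the `HUMAN_NORMS` values, and all function names are placeholders chosen for illustration, under the assumption of a 1-5 Likert response scale.

```python
import statistics

# Example Big Five items: (statement text, trait, reverse_coded). Illustrative only.
ITEMS = [
    ("I am the life of the party.", "extraversion", False),
    ("I get stressed out easily.", "neuroticism", False),
    ("I am relaxed most of the time.", "neuroticism", True),
]

# Placeholder human norms (mean, SD) per trait on the 1-5 Likert scale.
HUMAN_NORMS = {"extraversion": (3.2, 0.9), "neuroticism": (2.9, 0.9)}


def query_llm(prompt: str) -> list[int]:
    """Stand-in for a model API call; should return one 1-5 rating per statement."""
    raise NotImplementedError("plug in a model client here")


def administer(items, batch_size):
    """Present items in batches of `batch_size` and collect reverse-coded ratings."""
    ratings = []
    for start in range(0, len(items), batch_size):
        batch = items[start:start + batch_size]
        prompt = ("Rate how accurately each statement describes you, "
                  "from 1 (very inaccurate) to 5 (very accurate):\n")
        prompt += "\n".join(f"{i + 1}. {text}" for i, (text, _, _) in enumerate(batch))
        scores = query_llm(prompt)
        for (_, trait, reverse), score in zip(batch, scores):
            if reverse:  # reverse-code negatively keyed items so higher = more of the trait
                score = 6 - score
            ratings.append((trait, score))
    return ratings


def trait_shift_in_sd(ratings, trait):
    """Mean model rating for `trait`, expressed in human standard-deviation units."""
    scores = [s for t, s in ratings if t == trait]
    mean, sd = HUMAN_NORMS[trait]
    return (statistics.mean(scores) - mean) / sd
```

Comparing `trait_shift_in_sd` between a one-item-per-prompt condition and a many-items-per-prompt condition would surface the kind of shift the abstract reports (higher extraversion, lower neuroticism as batch size grows); randomizing item order and paraphrasing, as in the paper, guard against order and wording artifacts.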

Authors (6)
  1. Aadesh Salecha (4 papers)
  2. Molly E. Ireland (1 paper)
  3. Shashanka Subrahmanya (2 papers)
  4. João Sedoc (64 papers)
  5. Lyle H. Ungar (16 papers)
  6. Johannes C. Eichstaedt (7 papers)