
Exploring Social Desirability Response Bias in Large Language Models: Evidence from GPT-4 Simulations (2410.15442v1)

Published 20 Oct 2024 in cs.AI and cs.CY

Abstract: LLMs are employed to simulate human-like responses in social surveys, yet it remains unclear if they develop biases like social desirability response (SDR) bias. To investigate this, GPT-4 was assigned personas from four societies, using data from the 2022 Gallup World Poll. These synthetic samples were then prompted with or without a commitment statement intended to induce SDR. The results were mixed. While the commitment statement increased SDR index scores, suggesting SDR bias, it reduced civic engagement scores, indicating an opposite trend. Additional findings revealed demographic associations with SDR scores and showed that the commitment statement had limited impact on GPT-4's predictive performance. The study underscores potential avenues for using LLMs to investigate biases in both humans and LLMs themselves.

Exploring Social Desirability Response Bias in LLMs: Evidence from GPT-4 Simulations

The paper, "Exploring Social Desirability Response Bias in LLMs: Evidence from GPT-4 Simulations," investigates the potential of LLMs, specifically GPT-4, to emulate social desirability response (SDR) bias—a tendency often noted in human social survey responses. The paper employs a nuanced methodology to assess whether GPT-4 can both simulate human-like responses and replicate inherent biases observed in such contexts.

Methodology and Results

The authors conducted a series of experiments with GPT-4, assigning it personas drawn from 2022 Gallup World Poll data for four societies: Hong Kong, South Africa, the United Kingdom, and the United States. Each synthetic sample was then prompted either with or without a commitment statement, an intervention intended to induce SDR bias, and tasked with responding to survey questions.
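
To make the setup concrete, the sketch below shows one way such persona-conditioned prompting could be implemented against the OpenAI chat API. It is a minimal illustration only: the persona fields, commitment wording, and survey item are hypothetical placeholders, not the authors' actual instrument or prompts.

```python
# Illustrative sketch of persona-conditioned prompting with an optional
# commitment statement. Persona fields, wording, and the survey item are
# hypothetical placeholders, not the paper's exact instrument.
from openai import OpenAI

client = OpenAI()

COMMITMENT = (
    "Before answering, I commit to providing honest and accurate answers "
    "to the following questions."
)

def build_messages(persona: dict, question: str, with_commitment: bool) -> list[dict]:
    """Assemble chat messages for one synthetic respondent and one survey item."""
    system = (
        f"You are a {persona['age']}-year-old {persona['gender']} living in "
        f"{persona['society']} with {persona['education']} education. "
        "Answer the survey question as this person would."
    )
    user = COMMITMENT + "\n\n" + question if with_commitment else question
    return [{"role": "system", "content": system},
            {"role": "user", "content": user}]

def ask(persona: dict, question: str, with_commitment: bool) -> str:
    """Query GPT-4 once for a single persona/question/condition combination."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=build_messages(persona, question, with_commitment),
    )
    return response.choices[0].message.content

# Example: the same persona answered with and without the commitment statement.
persona = {"age": 45, "gender": "woman", "society": "the United Kingdom",
           "education": "tertiary"}
question = "In the past month, have you donated money to a charity? (Yes/No)"
print(ask(persona, question, with_commitment=True))
print(ask(persona, question, with_commitment=False))
```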

Key findings include:

  • SDR Index Variation: Adding the commitment statement significantly increased SDR index scores, suggesting potential SDR bias. Conversely, scores on the civic engagement (CE) index decreased, an unexpected reverse trend (one way to run this kind of condition comparison is sketched after this list).
  • Demographic Associations: Older and better-educated synthetic personas exhibited greater SDR bias, consistent with findings from human-subject studies. The paper also found a moderation effect: the impact of the commitment statement on SDR was more pronounced among older personas.
  • Predictive Performance: The commitment statement had limited overall influence on GPT-4's predictive performance. The model showed consistent response patterns across societies, but it was less accurate at predicting civic activities, particularly in the United States and the United Kingdom.
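
As a rough illustration of how the between-condition comparison might be analyzed, the sketch below assumes per-respondent SDR and CE index scores have already been computed. The file name, column names, and the choice of Welch's t-test are assumptions made for the example, not the authors' reported analysis.

```python
# Illustrative comparison of index scores across prompting conditions.
# The CSV layout, column names, and Welch's t-test are assumptions for
# this sketch, not the paper's exact analysis pipeline.
import pandas as pd
from scipy import stats

# One row per synthetic respondent, with columns:
# "condition" ("commitment" / "control"), "sdr_index", "ce_index".
df = pd.read_csv("synthetic_responses.csv")  # hypothetical file

for index_col in ["sdr_index", "ce_index"]:
    treated = df.loc[df["condition"] == "commitment", index_col]
    control = df.loc[df["condition"] == "control", index_col]
    t, p = stats.ttest_ind(treated, control, equal_var=False)  # Welch's t-test
    print(f"{index_col}: mean diff = {treated.mean() - control.mean():.3f}, "
          f"t = {t:.2f}, p = {p:.4f}")
```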

Implications and Future Directions

The research highlights the importance of understanding how biases manifest in LLMs. While the paper shows that LLMs can reflect complex, nuanced human biases, it also reveals contradictory patterns in the SDR simulations, underscoring how difficult it is to mimic human response behavior. The results imply that, while GPT-4 can emulate some demographic tendencies toward SDR, other aspects, such as cultural nuances, remain challenging.

Theoretical Implications: This paper raises questions about how LLMs interpret social cues embedded in textual prompts and how these interpretations might deviate from human patterns. It also suggests a need to further explore the dual role of commitment statements, which may heighten awareness and alter response tendencies in ways that differ from traditional assumptions.

Practical Implications: For practitioners using LLMs to generate survey responses or perform similar tasks, awareness of such biases, and of how variably they manifest, is critical for accurate data interpretation. Developers should also weigh cultural sensitivity and demographic diversity when assembling training datasets.

Future Research: The interdisciplinary nature of this paper opens avenues for additional exploration into how LLMs process and exhibit biases, particularly across diverse cultural contexts. Future work should aim to quantify how similar biases manifest across different model architectures and datasets, enhancing the robustness of LLMs’ applications in social sciences.

In conclusion, while the paper provides valuable insights into the capabilities and limitations of LLMs in simulating human-like biases, it also underscores the complexity and multidimensionality of these phenomena. The research encourages a continued, detailed exploration to refine LLM applications in understanding and replicating intricate human behaviors and biases.

Authors (5)
  1. Sanguk Lee (2 papers)
  2. Kai-Qi Yang (1 paper)
  3. Tai-Quan Peng (10 papers)
  4. Ruth Heo (1 paper)
  5. Hui Liu (481 papers)
Citations (1)