
Large language models should not replace human participants because they can misportray and flatten identity groups (2402.01908v2)

Published 2 Feb 2024 in cs.CY

Abstract: LLMs are increasing in capability and popularity, propelling their application in new domains -- including as replacements for human participants in computational social science, user testing, annotation tasks, and more. In many settings, researchers seek to distribute their surveys to a sample of participants that are representative of the underlying human population of interest. This means in order to be a suitable replacement, LLMs will need to be able to capture the influence of positionality (i.e., relevance of social identities like gender and race). However, we show that there are two inherent limitations in the way current LLMs are trained that prevent this. We argue analytically for why LLMs are likely to both misportray and flatten the representations of demographic groups, then empirically show this on 4 LLMs through a series of human studies with 3200 participants across 16 demographic identities. We also discuss a third limitation about how identity prompts can essentialize identities. Throughout, we connect each limitation to a pernicious history that explains why it is harmful for marginalized demographic groups. Overall, we urge caution in use cases where LLMs are intended to replace human participants whose identities are relevant to the task at hand. At the same time, in cases where the goal is to supplement rather than replace (e.g., pilot studies), we provide inference-time techniques that we empirically demonstrate do reduce, but do not remove, these harms.

On the Limitations of LLMs in Portraying Identity Groups

The paper "Large language models should not replace human participants because they can misportray and flatten identity groups" provides a comprehensive analysis of the limitations of LLMs in accurately representing demographic identities. The authors examine the pitfalls and harms of replacing human participants with LLMs, emphasizing why these limitations matter in real-world applications.

Technical and Ethical Limitations

This research explores two primary limitations of LLMs: misportrayal and flattening of demographic groups.

  1. Misportrayal: The authors demonstrate that LLMs often misrepresent demographic identities by producing responses akin to out-group imitations rather than in-group representations. This misrepresentation arises from the inherent nature of LLM training data, which rarely associates text with author demographics, leading to potential stereotyping. Empirical evidence from studies involving 3200 participants illustrates that LLM responses can closely align with stereotyped out-group portrayals, particularly for marginalized groups like non-binary individuals and those with disabilities.
  2. Flattening: LLMs tend to generate homogeneous responses, failing to capture the diverse perspectives within a demographic group. This flattening effect results from the models being trained to produce the most likely outputs, which erases subgroup heterogeneity. Such homogenization is especially problematic for marginalized groups historically misportrayed as one-dimensional. (A rough way to quantify this homogeneity is sketched after this list.)
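
For intuition, here is a minimal sketch of how flattening could be measured: sample an LLM repeatedly with an identity prompt and compare the lexical diversity of its responses against responses from human participants. The Jaccard-based distance and the placeholder samples below are illustrative assumptions, not the paper's actual methodology, which relies on human studies.

```python
from itertools import combinations

def jaccard_distance(a: str, b: str) -> float:
    """Lexical distance between two texts (1.0 = no shared words)."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return 1.0 - len(ta & tb) / len(ta | tb) if (ta | tb) else 0.0

def mean_pairwise_distance(responses: list[str]) -> float:
    """Average pairwise distance across a sample of responses;
    lower values indicate more homogeneous (flattened) output."""
    pairs = list(combinations(responses, 2))
    return sum(jaccard_distance(a, b) for a, b in pairs) / len(pairs)

# Placeholder samples: in practice, llm_samples would come from repeated calls
# to a model with an identity prompt, and human_samples from survey participants.
llm_samples = [
    "I feel my identity is often misunderstood by others.",
    "I feel my identity is often misunderstood by people around me.",
    "I feel that others often misunderstand my identity.",
]
human_samples = [
    "Honestly, it depends on the day and who I am talking to.",
    "I rarely think about it unless someone brings it up.",
    "It shapes everything, from my job to my friendships.",
]

print("LLM diversity:  ", round(mean_pairwise_distance(llm_samples), 3))
print("Human diversity:", round(mean_pairwise_distance(human_samples), 3))
```

A lower average distance for the LLM samples than for the human samples would be one signal of flattening; the paper's own evidence instead comes from comparing model outputs against responses from 3200 human participants.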

Implications and Alternatives

The paper cautions against using LLMs to replace human participants in scenarios where demographic identity is central to the task. It situates this caution in the historical context of erasure and stereotyping, urging that current technological deployments not repeat these harms.

In scenarios that aim to supplement rather than fully replace human input, the authors propose inference-time techniques that reduce, but do not remove, these limitations:

  • Identity-Coded Names: Prompting LLMs with identity-coded names rather than explicit identity labels can yield more nuanced representations, particularly for intersectional identities like Black men and women.
  • Higher Temperature Settings: Raising the temperature hyperparameter during inference increases the diversity of LLM-generated responses, although it does not fully capture human-like variation. Both techniques are sketched in the example below.
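
The snippet below is a minimal sketch of both mitigations, assuming the OpenAI Python SDK as the client; the model name, the identity-coded name, and the survey question are illustrative placeholders rather than choices taken from the paper.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

question = "How do you feel about remote work?"

# Explicit identity label: more prone to stereotyped, flattened portrayals.
label_prompt = f"Answer as a Black woman: {question}"

# Identity-coded name instead of an explicit label (name chosen for illustration).
name_prompt = f"Answer as a survey respondent named Keisha Washington: {question}"

def sample(prompt: str, n: int = 5, temperature: float = 1.2) -> list[str]:
    """Draw several samples at a higher-than-default temperature
    to increase response diversity."""
    completion = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
        temperature=temperature,
        n=n,
    )
    return [choice.message.content for choice in completion.choices]

for variant, prompt in [("explicit label", label_prompt), ("identity-coded name", name_prompt)]:
    print(f"--- {variant} ---")
    for response in sample(prompt):
        print(response)
```

The same pattern applies to any chat-completion API that exposes a temperature parameter and supports multiple samples per request.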

Furthermore, for applications that require broader response coverage, the authors suggest prompting along alternative axes, such as behavioral personas or political orientations, rather than sensitive demographic attributes, to avoid essentializing identities.

Broader Considerations

The paper emphasizes that the societal impact of deploying LLMs extends beyond technical limitations, touching on issues of autonomy and the potential amplification of social hierarchies. The authors advocate for careful consideration of the ethical implications involved in replacing human agency and lived experiences with machine-generated outputs.

Conclusion

This work critically examines the notion of replacing human participants with LLMs, providing a detailed account of the inherent limitations and associated harms. By offering viable alternatives and grounding their arguments in historical contexts of discrimination, the authors contribute valuable insights into responsible AI deployment. The findings underscore the need for continued scrutiny and ethical deliberation in the adoption of LLMs across diverse socio-technical settings. Future developments in AI must incorporate these considerations to ensure equitable and accurate representations of demographic identities.

Authors
  1. Angelina Wang
  2. Jamie Morgenstern
  3. John P. Dickerson