Generative Language Models Exhibit Social Identity Biases (2310.15819v2)

Published 24 Oct 2023 in cs.CL and cs.CY

Abstract: The surge in popularity of LLMs has given rise to concerns about biases that these models could learn from humans. We investigate whether ingroup solidarity and outgroup hostility, fundamental social identity biases known from social psychology, are present in 56 LLMs. We find that almost all foundational LLMs and some instruction fine-tuned models exhibit clear ingroup-positive and outgroup-negative associations when prompted to complete sentences (e.g., "We are..."). Our findings suggest that modern LLMs exhibit fundamental social identity biases to a similar degree as humans, both in the lab and in real-world conversations with LLMs, and that curating training data and instruction fine-tuning can mitigate such biases. Our results have practical implications for creating less biased large language models and further underscore the need for more research into user interactions with LLMs to prevent potential bias reinforcement in humans.
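The probe described in the abstract is simple in structure: prompt a model with an ingroup cue ("We are...") or an outgroup cue ("They are..."), sample completions, and score their valence. Below is a minimal sketch of that setup, assuming GPT-2 and an off-the-shelf Hugging Face sentiment pipeline as stand-ins; the paper's 56 models, full prompt set, and association statistics are not reproduced here.

```python
# Minimal sketch of an ingroup/outgroup sentence-completion probe.
# Assumptions (not from the paper): GPT-2 as the probed model and a
# default sentiment-analysis pipeline as the valence classifier.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
sentiment = pipeline("sentiment-analysis")

# Ingroup vs. outgroup prompt cues, as in the paper's "We are..." example.
prompts = {"ingroup": "We are", "outgroup": "They are"}

for group, prompt in prompts.items():
    completions = generator(
        prompt,
        max_new_tokens=20,
        num_return_sequences=5,
        do_sample=True,
        pad_token_id=50256,  # GPT-2's EOS token; avoids a padding warning
    )
    for c in completions:
        text = c["generated_text"]
        label = sentiment(text)[0]
        print(f"{group}: {text!r} -> {label['label']} ({label['score']:.2f})")
```

Comparing the share of negative-valence completions between the two cues gives a rough analogue of the ingroup-positive/outgroup-negative asymmetry the paper measures at scale.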

Authors (6)
  1. Tiancheng Hu (13 papers)
  2. Yara Kyrychenko (4 papers)
  3. Steve Rathje (1 paper)
  4. Nigel Collier (83 papers)
  5. Sander van der Linden (6 papers)
  6. Jon Roozenbeek (2 papers)
Citations (14)