Stepmothers are mean and academics are pretentious: What do pretrained language models learn about you? (2109.10052v1)

Published 21 Sep 2021 in cs.CL

Abstract: In this paper, we investigate what types of stereotypical information are captured by pretrained language models. We present the first dataset comprising stereotypical attributes of a range of social groups and propose a method to elicit stereotypes encoded by pretrained language models in an unsupervised fashion. Moreover, we link the emergent stereotypes to their manifestation as basic emotions as a means to study their emotional effects in a more generalized manner. To demonstrate how our methods can be used to analyze emotion and stereotype shifts due to linguistic experience, we use fine-tuning on news sources as a case study. Our experiments expose how attitudes towards different social groups vary across models and how quickly emotions and stereotypes can shift at the fine-tuning stage.
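One way to picture the unsupervised elicitation described in the abstract is as cloze-style probing of a masked language model. The sketch below is a minimal illustration of that idea only, not the paper's actual method: the prompt template ("Why are X so [MASK]?"), the choice of bert-base-uncased, and the social groups listed are all assumptions introduced here for demonstration.

from transformers import pipeline

# Minimal sketch: probe a masked language model for attributes it associates
# with a social group by completing a cloze-style prompt. Template, model,
# and groups are illustrative assumptions, not the paper's exact setup.
fill_mask = pipeline("fill-mask", model="bert-base-uncased")

groups = ["stepmothers", "academics"]
for group in groups:
    prompt = f"Why are {group} so [MASK]?"
    # top_k returns the k highest-probability fillers for the masked slot
    for prediction in fill_mask(prompt, top_k=5):
        print(group, prediction["token_str"], round(prediction["score"], 3))

Running the same probe before and after fine-tuning a model (for example on a particular news source) would let one compare how the elicited attributes, and any emotions mapped onto them, shift with linguistic experience.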

Authors (3)
  1. Rochelle Choenni (17 papers)
  2. Ekaterina Shutova (52 papers)
  3. Robert van Rooij (5 papers)
Citations (23)
