
Simple dynamic word embeddings for mapping perceptions in the public sphere (1904.03352v2)

Published 6 Apr 2019 in cs.CY

Abstract: Word embeddings trained on large-scale historical corpora can illuminate human biases and stereotypes that perpetuate social inequalities. These embeddings are often trained in separate vector space models defined according to different attributes of interest. In this paper, we develop a unified dynamic embedding model that learns attribute-specific word embeddings. We apply our model to investigate i) 20th century gender and ethnic occupation biases embedded in the Corpus of Historical American English (COHA), and ii) biases against refugees embedded in a novel corpus of talk radio transcripts containing 119 million words produced over one month across 83 stations and 64 cities. Our results shed preliminary light on scenarios in which dynamic embedding models may be more suitable for representing linguistic biases than individual vector space models, and vice versa.
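The abstract describes quantifying occupation biases from embeddings trained per attribute (e.g., per decade of COHA). As a minimal sketch of the general association-based bias measurement used in this literature, not necessarily the paper's exact formulation, the snippet below scores how much closer a set of occupation words sits to one attribute group (e.g., female terms) than another. The embedding dictionary and word lists are hypothetical placeholders.

```python
import numpy as np

def cosine(u, v):
    # Cosine similarity between two embedding vectors.
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def association_bias(emb, occupations, group_a, group_b):
    """Mean difference in cosine similarity between each occupation word
    and the centroids of two attribute word sets (e.g., female vs. male
    terms). Positive values mean occupations associate more with group_a.
    `emb` is a hypothetical dict mapping word -> numpy vector, e.g. loaded
    from one attribute-specific (decade-specific) embedding model."""
    a = np.mean([emb[w] for w in group_a], axis=0)
    b = np.mean([emb[w] for w in group_b], axis=0)
    return float(np.mean([cosine(emb[w], a) - cosine(emb[w], b)
                          for w in occupations]))

# Illustrative usage: run once per decade's embeddings to trace a
# bias trajectory over the 20th century (word lists are placeholders).
# bias_1950 = association_bias(emb_1950,
#                              ["nurse", "engineer", "teacher"],
#                              ["she", "her", "woman"],
#                              ["he", "his", "man"])
```

Under a dynamic embedding model, the per-attribute vectors share a common space, so such scores can be compared directly across decades; with separate vector space models, the spaces must first be aligned.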

Citations (17)
