
Seeds of Stereotypes: A Large-Scale Textual Analysis of Race and Gender Associations with Diseases in Online Sources (2405.05049v1)

Published 8 May 2024 in cs.CL

Abstract:

Background: Advancements in LLMs hold transformative potential in healthcare; however, recent work has raised concern about the tendency of these models to produce outputs that display racial or gender biases. Although training data is a likely source of such biases, exploration of disease and demographic associations in text data at scale has been limited.

Methods: We conducted a large-scale textual analysis using a dataset comprising diverse web sources, including Arxiv, Wikipedia, and Common Crawl. The study analyzed the context in which various diseases are discussed alongside markers of race and gender. Given that LLMs are pre-trained on similar datasets, this approach allowed us to examine the potential biases that LLMs may learn and internalize. We compared these findings with actual demographic disease prevalence as well as GPT-4 outputs in order to evaluate the extent of bias representation.

Results: Our findings indicate that demographic terms are disproportionately associated with specific disease concepts in online texts. Gender terms are prominently associated with disease concepts, while racial terms are associated far less frequently. We find widespread disparities in the associations of specific racial and gender terms with the 18 diseases analyzed. Most prominently, we see an overall significant overrepresentation of Black race mentions relative to population proportions.

Conclusions: Our results highlight the need for critical examination and transparent reporting of biases in LLM pretraining datasets. Our study suggests the need to develop mitigation strategies to counteract the influence of biased training data in LLMs, particularly in sensitive domains such as healthcare.
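The core measurement in the Methods (counting how often demographic terms appear in the context of disease mentions) can be illustrated with a minimal co-occurrence counter. This is a simplified sketch, not the paper's actual pipeline: the term lists, tokenization, and window size here are illustrative assumptions, whereas the study uses full disease and demographic lexicons over web-scale corpora.

```python
from collections import Counter

# Hypothetical, abbreviated term lists for illustration only;
# the study's actual lexicons are far more extensive.
DISEASE_TERMS = {"asthma", "diabetes", "hypertension"}
DEMOGRAPHIC_TERMS = {"male", "female", "man", "woman",
                     "black", "white", "asian", "hispanic"}

def cooccurrence_counts(text: str, window: int = 10) -> Counter:
    """Count demographic terms appearing within `window` tokens
    of each disease mention (naive whitespace tokenization)."""
    tokens = text.lower().split()
    counts: Counter = Counter()
    for i, tok in enumerate(tokens):
        if tok in DISEASE_TERMS:
            # Look at a symmetric window of tokens around the disease mention.
            context = tokens[max(0, i - window): i + window + 1]
            for c in context:
                if c in DEMOGRAPHIC_TERMS:
                    counts[(tok, c)] += 1
    return counts

counts = cooccurrence_counts(
    "a female patient presented with asthma and later diabetes"
)
```

Aggregating such (disease, demographic term) counts over a corpus and comparing the resulting proportions against real-world prevalence data is the kind of comparison the Results section reports.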

Authors (10)
  1. Lasse Hyldig Hansen (2 papers)
  2. Nikolaj Andersen (1 paper)
  3. Jack Gallifant (17 papers)
  4. Liam G. McCoy (3 papers)
  5. James K Stone (1 paper)
  6. Nura Izath (1 paper)
  7. Marcela Aguirre-Jerez (1 paper)
  8. Judy Gichoya (13 papers)
  9. Leo Anthony Celi (49 papers)
  10. Danielle S Bitterman (3 papers)
Citations (2)