Whose Side Are You On? Investigating the Political Stance of Large Language Models (2403.13840v1)

Published 15 Mar 2024 in cs.CL, cs.AI, and cs.SI

Abstract: LLMs have gained significant popularity for their application in various everyday tasks such as text generation, summarization, and information retrieval. As the widespread adoption of LLMs continues to surge, it becomes increasingly crucial to ensure that these models yield politically impartial responses, with the aim of preventing information bubbles, upholding fairness in representation, and mitigating confirmation bias. In this paper, we propose a quantitative framework and pipeline designed to systematically investigate the political orientation of LLMs. Our investigation examines the political alignment of LLMs across eight polarizing topics, spanning from abortion to LGBTQ issues. Across topics, the results indicate that LLMs tend to provide responses aligned with liberal or left-leaning perspectives rather than conservative or right-leaning ones when user queries include details pertaining to occupation, race, or political affiliation. The findings presented in this study not only reaffirm earlier observations regarding the left-leaning characteristics of LLMs but also surface particular attributes, such as occupation, that are especially susceptible to such inclinations even when directly steered towards conservatism. To reduce the risk of politicised responses from these models, users should be mindful when crafting queries and take care to use neutral prompt language.
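The abstract describes a pipeline that varies user attributes (occupation, race, political affiliation) across polarizing topics and measures the stance of the model's responses. The sketch below illustrates the general shape of such a pipeline; the topic list, attribute values, and the keyword-based `stance_score` are illustrative placeholders (the paper's actual framework, topics, and stance measurement are not reproduced here, and a real pipeline would use a trained stance classifier rather than keyword matching).

```python
# Hypothetical sketch of an attribute-by-topic probing pipeline.
# Topics and attribute values below are illustrative, not the paper's exact set.
TOPICS = ["abortion", "immigration", "gun control", "LGBTQ rights"]
ATTRIBUTES = {
    "occupation": ["teacher", "farmer"],
    "political affiliation": ["Democrat", "Republican"],
}

def build_prompts(topics, attributes):
    """Cross each polarizing topic with each persona attribute value."""
    prompts = []
    for topic in topics:
        for attr, values in attributes.items():
            for value in values:
                prompts.append(f"As a {value}, what is your view on {topic}?")
    return prompts

def stance_score(response):
    """Toy stand-in for a stance classifier: positive leans liberal,
    negative leans conservative. A real pipeline would use a trained model."""
    text = response.lower()
    liberal = sum(w in text for w in ("rights", "equality"))
    conservative = sum(w in text for w in ("tradition", "values"))
    return liberal - conservative

prompts = build_prompts(TOPICS, ATTRIBUTES)
# Each prompt would be sent to the LLM under study, and the aggregate
# stance_score distribution compared across attribute values.
```

Aggregating scores per attribute value (e.g. comparing the "Democrat" and "Republican" persona distributions) is what lets a framework like this quantify directional bias rather than judge single responses.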

Authors (9)
  1. Pagnarasmey Pit (1 paper)
  2. Xingjun Ma (114 papers)
  3. Mike Conway (8 papers)
  4. Qingyu Chen (57 papers)
  5. James Bailey (70 papers)
  6. Henry Pit (1 paper)
  7. Putrasmey Keo (1 paper)
  8. Watey Diep (1 paper)
  9. Yu-Gang Jiang (223 papers)
Citations (4)