
Large Means Left: Political Bias in Large Language Models Increases with Their Number of Parameters (2505.04393v1)

Published 7 May 2025 in cs.CL

Abstract: With the increasing prevalence of artificial intelligence, careful evaluation of inherent biases needs to be conducted to form the basis for alleviating the effects these predispositions can have on users. LLMs are predominantly used by many as a primary source of information for various topics. LLMs frequently make factual errors, fabricate data (hallucinations), or present biases, exposing users to misinformation and influencing opinions. Educating users on their risks is key to responsible use, as bias, unlike hallucinations, cannot be caught through data verification. We quantify the political bias of popular LLMs in the context of the recent vote of the German Bundestag using the score produced by the Wahl-O-Mat. This metric measures the alignment between an individual's political views and the positions of German political parties. We compare the models' alignment scores to identify factors influencing their political preferences. Doing so, we discover a bias toward left-leaning parties, most dominant in larger LLMs. Also, we find that the language we use to communicate with the models affects their political views. Additionally, we analyze the influence of a model's origin and release date and compare the results to the outcome of the recent vote of the Bundestag. Our results imply that LLMs are prone to exhibiting political bias. Large corporations with the necessary means to develop LLMs, thus, knowingly or unknowingly, have a responsibility to contain these biases, as they can influence each voter's decision-making process and inform public opinion in general and at scale.

Political Bias in LLMs with Increased Parameters

This research paper provides a detailed analysis of political bias inherent in LLMs, specifically examining how this bias varies with the number of parameters. The authors utilize the Wahl-O-Mat score, a metric that gauges alignment with German political parties, to evaluate biases in various LLMs in relation to recent German Bundestag elections. The paper reveals a discernible left-leaning inclination in LLMs and finds that larger models exhibit greater bias.

Summary of Findings

The paper's methodology involved assessing the political alignment of seven prominent LLMs by comparing their responses to Wahl-O-Mat statements. The models ranged in size from 7 billion to 70 billion parameters and included established models such as Llama 2 and Llama 3 as well as more recent releases like DeepSeek R1 and SimpleScaling S1.
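The scoring behind this comparison is easy to approximate: each model answers the same fixed set of policy statements, and its answers are compared against every party's published stances. The Python sketch below illustrates the idea with made-up statements and party positions; the real Wahl-O-Mat questionnaire and its weighting rules differ in detail.

```python
# Minimal sketch of Wahl-O-Mat-style alignment scoring (illustrative only;
# statements, party stances, and the answer encoding are hypothetical).

# Answers and stances are encoded as +1 (agree), 0 (neutral), -1 (disagree).
STATEMENTS = [
    "Statement 1 (placeholder policy statement)",
    "Statement 2 (placeholder policy statement)",
    "Statement 3 (placeholder policy statement)",
]

PARTY_POSITIONS = {
    "Party A": [+1, -1, 0],
    "Party B": [-1, +1, +1],
}

def alignment_scores(model_answers: list[int]) -> dict[str, float]:
    """Percentage of statements on which the model's answer matches each party.

    The real Wahl-O-Mat applies finer-grained weighting (e.g. partial credit
    when only one side is neutral), so this is a simplification.
    """
    scores = {}
    for party, positions in PARTY_POSITIONS.items():
        matches = sum(a == p for a, p in zip(model_answers, positions))
        scores[party] = 100.0 * matches / len(STATEMENTS)
    return scores

# Example: answers collected by prompting an LLM with each statement
# and mapping its free-text reply to +1 / 0 / -1.
model_answers = [+1, -1, +1]
print(alignment_scores(model_answers))  # Party A ≈ 66.7, Party B ≈ 33.3
```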

The paper documents several notable findings:

  • Political Bias Increases with Model Size: Larger LLMs consistently showed more pronounced alignment with left-leaning parties than their smaller counterparts. The paper measures political alignment using a computed θ score, where a higher score indicates stronger alignment with left-leaning parties (a simple illustrative construction is sketched after this list).
  • Influence of Language and Release Date: Prompting the models in English yielded slightly more left-leaning responses than prompting them in German. Furthermore, more recently released models showed increased political bias.
  • Impact of Origin: Contrary to expectations that cultural factors might influence bias, the paper concludes that the origin of LLMs — whether developed in American, European, or Chinese contexts — does not significantly affect their political orientation.
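The paper condenses these per-party alignments into a single θ score. Since the exact formula is not reproduced in this summary, the sketch below shows one plausible construction, assuming parties carry hypothetical positions on a left-right axis and the alignment scores from the previous sketch serve as weights.

```python
# Illustrative left-lean score; NOT the paper's exact definition of theta.
# Assumption: each party is assigned a position on a left-right axis
# (+1 = far left, -1 = far right), and the model's Wahl-O-Mat alignment
# scores act as weights, so a higher value means a stronger left lean.

PARTY_AXIS = {
    "Party A": +0.8,  # hypothetical left-leaning party
    "Party B": -0.6,  # hypothetical right-leaning party
}

def left_lean_score(alignment: dict[str, float]) -> float:
    """Alignment-weighted mean axis position of the parties."""
    total = sum(alignment.values())
    if total == 0:
        return 0.0
    return sum(PARTY_AXIS[p] * s for p, s in alignment.items()) / total

# A model that aligns 80% with the left-leaning party and 40% with the
# right-leaning one ends up with a positive (left-leaning) score.
print(left_lean_score({"Party A": 80.0, "Party B": 40.0}))  # ≈ 0.33
```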

Implications

The authors highlight the implications of their findings amidst the rising utilization of LLMs in information dissemination and decision-making. Given the ability of LLMs to subtly influence opinions through information framing, political bias poses potential risks, particularly in shaping public discourse and electoral decisions. The research underscores the responsibility of corporations developing LLMs to address these biases effectively.

Future Research and Developments

This paper sets a foundation for future studies focused on quantifying and mitigating biases within LLMs. Further research could explore model training data, representation gaps, and the impact of tokenizers on data skewing as potential sources of bias. Additionally, examining the actual usage and influence of LLMs on voter behavior during elections could provide insights into their societal implications.

As LLMs continue to improve and permeate various domains, understanding and controlling inherent biases will become increasingly critical to ensuring ethical AI use. Future advancements in AI could entail developing bias detection mechanisms, improving the transparency of model training processes, and fostering more inclusive datasets that represent diverse political spectrums.

Conclusion

The paper effectively contributes to the ongoing discourse on AI ethics by identifying and quantifying the political biases entrenched within LLMs, particularly with more complex models. As AI systems gain prominence, recognizing their susceptibility to bias is integral to shaping technology that responsibly serves public interest without undue influence.

Authors (4)
  1. David Exler (1 paper)
  2. Mark Schutera (5 papers)
  3. Markus Reischl (16 papers)
  4. Luca Rettenberger (3 papers)