Investigating Bias in LLM-Based Bias Detection: Disparities between LLMs and Human Perception (2403.14896v2)

Published 22 Mar 2024 in cs.CY

Abstract: The pervasive spread of misinformation and disinformation in social media underscores the critical importance of detecting media bias. While robust LLMs have emerged as foundational tools for bias prediction, concerns about inherent biases within these models persist. In this work, we investigate the presence and nature of bias within LLMs and its consequential impact on media bias detection. Departing from conventional approaches that focus solely on bias detection in media content, we delve into biases within the LLM systems themselves. Through meticulous examination, we probe whether LLMs exhibit biases, particularly in political bias prediction and text continuation tasks. Additionally, we explore bias across diverse topics, aiming to uncover nuanced variations in bias expression within the LLM framework. Importantly, we propose debiasing strategies, including prompt engineering and model fine-tuning. Extensive analysis of bias tendencies across different LLMs sheds light on the broader landscape of bias propagation in LLMs. This study advances our understanding of LLM bias, offering critical insights into its implications for bias detection tasks and paving the way for more robust and equitable AI systems.
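To make the kind of probe the abstract describes concrete, the sketch below shows how one might query a chat LLM for a political-bias label on a news excerpt and compare it against a prompt-engineered "debiased" variant that asks the model to judge only the framing of the text. This is a minimal illustrative sketch, not the authors' implementation: the prompts, the label set, and the `gpt-4o-mini` model name are assumptions made for demonstration.

```python
# Minimal sketch of a political-bias probe and a prompt-engineering debiasing
# variant. Prompts, label set, and model name are hypothetical, not the paper's.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

LABELS = ["left", "center", "right"]  # assumed label set for illustration

BASELINE_PROMPT = (
    "Classify the political bias of the following news excerpt as one of "
    "left, center, or right. Answer with a single word.\n\nExcerpt: {text}"
)

# Debiased variant: instructs the model to set aside its own leaning and judge
# only the framing, sourcing, and word choice in the excerpt itself.
DEBIASED_PROMPT = (
    "You are a neutral media analyst. Ignore your own views and judge only "
    "the framing, sourcing, and word choice of the excerpt. Classify its "
    "political bias as left, center, or right. Answer with a single word.\n\n"
    "Excerpt: {text}"
)

def classify(text: str, prompt_template: str, model: str = "gpt-4o-mini") -> str:
    """Return the model's one-word bias label for a news excerpt."""
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt_template.format(text=text)}],
        temperature=0,  # keep outputs stable so prompt variants are comparable
    )
    label = response.choices[0].message.content.strip().lower()
    return label if label in LABELS else "unparsed"

if __name__ == "__main__":
    excerpt = "The senator's so-called reform bill once again puts donors first."
    print("baseline :", classify(excerpt, BASELINE_PROMPT))
    print("debiased :", classify(excerpt, DEBIASED_PROMPT))
```

Comparing the label distributions produced by the two prompts over many excerpts is one simple way to surface the kind of systematic leaning in LLM-based bias detection that the paper investigates.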

Authors (4)
  1. Luyang Lin (5 papers)
  2. Lingzhi Wang (54 papers)
  3. Jinsong Guo (3 papers)
  4. Kam-Fai Wong (92 papers)
Citations (11)