A Multi-LLM Debiasing Framework (2409.13884v1)

Published 20 Sep 2024 in cs.CL, cs.AI, cs.CY, and cs.LG

Abstract: LLMs are powerful tools with the potential to benefit society immensely, yet they have demonstrated biases that perpetuate societal inequalities. Despite significant advances in bias mitigation techniques such as data augmentation, zero-shot prompting, and model fine-tuning, biases persist, including subtle biases that may elude human detection. Recent research has shown growing interest in multi-LLM approaches, which have been demonstrated to improve the quality of reasoning and factuality in LLMs. Building on this approach, we propose a novel multi-LLM debiasing framework aimed at reducing bias in LLMs. Our work is the first to introduce and evaluate two distinct approaches within this framework for debiasing LLMs: a centralized method, where the conversation is facilitated by a single central LLM, and a decentralized method, where all models communicate directly. Our findings reveal that our multi-LLM framework significantly reduces bias in LLMs, outperforming the baseline method across several social groups.
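
To make the two conversation topologies concrete, here is a minimal sketch assuming each LLM is a string-to-string callable. The function names, prompts, round counts, and aggregation choices (`centralized_debias`, `decentralized_debias`, `DEBIAS_PROMPT`) are illustrative assumptions, not the authors' implementation.

```python
from typing import Callable, List

# An "LLM" here is any function mapping a prompt string to a response string.
LLM = Callable[[str], str]

# Hypothetical critique instruction; the paper's actual prompts are not shown here.
DEBIAS_PROMPT = "Review the following response for social bias and suggest a revision:\n"

def centralized_debias(central: LLM, workers: List[LLM], question: str,
                       rounds: int = 2) -> str:
    """Centralized topology: a single central LLM facilitates the conversation,
    collecting critiques from the other models and synthesizing the answer."""
    answer = central(question)
    for _ in range(rounds):
        critiques = [w(DEBIAS_PROMPT + answer) for w in workers]
        # Only the central model sees all critiques and produces the revision.
        answer = central(
            f"Question: {question}\nCurrent answer: {answer}\n"
            "Critiques:\n" + "\n".join(critiques) +
            "\nProduce a revised, less biased answer."
        )
    return answer

def decentralized_debias(models: List[LLM], question: str,
                         rounds: int = 2) -> str:
    """Decentralized topology: all models communicate directly, each revising
    its own answer after seeing every peer's latest answer."""
    answers = [m(question) for m in models]
    for _ in range(rounds):
        # Synchronous update: every model revises based on the previous round.
        answers = [
            m(f"Question: {question}\nYour answer: {answers[i]}\n"
              "Peer answers:\n" +
              "\n".join(a for j, a in enumerate(answers) if j != i) +
              "\nRevise your answer to reduce social bias.")
            for i, m in enumerate(models)
        ]
    # How to aggregate the final answers is a design choice; taking one
    # model's final answer is just the simplest option.
    return answers[0]
```

The key structural difference the sketch highlights: in the centralized method the worker models never see each other's outputs, only the facilitator does, whereas in the decentralized method every model conditions on all of its peers each round.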

Authors (10)
  1. Deonna M. Owens (1 paper)
  2. Ryan A. Rossi (124 papers)
  3. Sungchul Kim (65 papers)
  4. Tong Yu (119 papers)
  5. Franck Dernoncourt (161 papers)
  6. Xiang Chen (343 papers)
  7. Ruiyi Zhang (98 papers)
  8. Jiuxiang Gu (73 papers)
  9. Hanieh Deilamsalehy (19 papers)
  10. Nedim Lipka (49 papers)