
UniBias: Unveiling and Mitigating LLM Bias through Internal Attention and FFN Manipulation (2405.20612v2)

Published 31 May 2024 in cs.CL and cs.AI

Abstract: LLMs have demonstrated impressive capabilities in various tasks using the in-context learning (ICL) paradigm. However, their effectiveness is often compromised by inherent bias, leading to prompt brittleness, i.e., sensitivity to design settings such as example selection, order, and prompt formatting. Previous studies have addressed LLM bias through external adjustment of model outputs, but the internal mechanisms that lead to such bias remain unexplored. Our work delves into these mechanisms, particularly investigating how feedforward neural networks (FFNs) and attention heads give rise to LLM bias. By interpreting the contribution of individual FFN vectors and attention heads, we identify the biased LLM components that skew LLMs' predictions toward specific labels. To mitigate these biases, we introduce UniBias, an inference-only method that effectively identifies and eliminates biased FFN vectors and attention heads. Extensive experiments across 12 NLP datasets demonstrate that UniBias significantly enhances ICL performance and alleviates the prompt brittleness of LLMs.
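
The abstract does not give implementation details, but the core operation it describes, interpreting each FFN value vector's contribution to the label logits and eliminating the skewed ones, can be sketched. Below is a minimal, hypothetical PyTorch sketch based only on the abstract: the function names, the uniformity-based bias criterion, and the threshold are illustrative assumptions, not the paper's actual algorithm.

```python
import torch

def label_bias_scores(ffn_value_vectors: torch.Tensor,
                      unembedding: torch.Tensor,
                      label_token_ids: torch.Tensor) -> torch.Tensor:
    """Score how strongly each FFN value vector skews the label distribution.

    ffn_value_vectors: (num_vectors, d_model)
    unembedding:       (vocab_size, d_model)
    label_token_ids:   (num_labels,) token ids of the candidate labels
    """
    # Project each value vector onto the unembedding rows of the label tokens,
    # giving that vector's direct contribution to each label's logit.
    label_logits = ffn_value_vectors @ unembedding[label_token_ids].T
    probs = torch.softmax(label_logits, dim=-1)  # (num_vectors, num_labels)
    uniform = 1.0 / probs.shape[-1]
    # Total variation from the uniform distribution: 0 means no label preference.
    return (probs - uniform).abs().sum(dim=-1)

def eliminate_biased_vectors(ffn_value_vectors: torch.Tensor,
                             scores: torch.Tensor,
                             threshold: float) -> torch.Tensor:
    """Zero out value vectors whose bias score exceeds the threshold."""
    keep = (scores <= threshold).float().unsqueeze(-1)
    return ffn_value_vectors * keep

# Toy usage with random weights; the threshold of 0.5 is arbitrary. Biased
# attention heads would be handled analogously, e.g. by masking a head's
# output before the residual stream.
d_model, vocab_size, num_vectors = 64, 1000, 256
values = torch.randn(num_vectors, d_model)
W_U = torch.randn(vocab_size, d_model)
labels = torch.tensor([17, 42])  # e.g. token ids for "positive"/"negative"

scores = label_bias_scores(values, W_U, labels)
debiased = eliminate_biased_vectors(values, scores, threshold=0.5)
```

In the paper's inference-only setting the masking would presumably be applied inside the transformer's forward pass; here the zeroing is shown on a standalone tensor for brevity.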

Authors (5)
  1. Hanzhang Zhou (6 papers)
  2. Zijian Feng (12 papers)
  3. Zixiao Zhu (8 papers)
  4. Junlang Qian (4 papers)
  5. Kezhi Mao (24 papers)
Citations (3)
