Debiasing Methods in Natural Language Understanding Make Bias More Accessible (2109.04095v1)

Published 9 Sep 2021 in cs.CL

Abstract: Model robustness to bias is often determined by the generalization on carefully designed out-of-distribution datasets. Recent debiasing methods in natural language understanding (NLU) improve performance on such datasets by pressuring models into making unbiased predictions. An underlying assumption behind such methods is that this also leads to the discovery of more robust features in the model's inner representations. We propose a general probing-based framework that allows for post-hoc interpretation of biases in LLMs, and use an information-theoretic approach to measure the extractability of certain biases from the model's representations. We experiment with several NLU datasets and known biases, and show that, counter-intuitively, the more an LLM is pushed towards a debiased regime, the more bias is actually encoded in its inner representations.
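The probing idea in the abstract can be illustrated with a minimal sketch: train a lightweight classifier on frozen model representations to predict a known bias feature, and read its held-out performance as a rough extractability score. This is not the authors' implementation; the paper uses an information-theoretic measure, whereas the sketch below uses a scikit-learn logistic-regression probe with plain accuracy, and the representation and bias-label arrays are placeholders.

```python
# Illustrative probing sketch (assumption: not the paper's actual framework).
# A simple linear probe is trained on frozen representations to predict a
# binary bias label; held-out accuracy serves as a crude extractability proxy.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

def bias_extractability(representations: np.ndarray, bias_labels: np.ndarray) -> float:
    """Return held-out probe accuracy as a rough measure of encoded bias."""
    X_train, X_test, y_train, y_test = train_test_split(
        representations, bias_labels, test_size=0.2, random_state=0
    )
    probe = LogisticRegression(max_iter=1000)
    probe.fit(X_train, y_train)
    return accuracy_score(y_test, probe.predict(X_test))

# Placeholder data: 1000 sentence embeddings (768-dim) with a binary bias label.
reps = np.random.randn(1000, 768)
bias = np.random.randint(0, 2, size=1000)
print(f"probe accuracy: {bias_extractability(reps, bias):.3f}")
```

Comparing this score across checkpoints trained with and without a debiasing objective mirrors, in spirit, the paper's comparison of how much bias remains extractable from the inner representations.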

Authors (2)
  1. Michael Mendelson (3 papers)
  2. Yonatan Belinkov (111 papers)
Citations (18)