Causal Mediation Analysis for Interpreting Neural NLP: The Case of Gender Bias (2004.12265v2)

Published 26 Apr 2020 in cs.CL

Abstract: Common methods for interpreting neural models in natural language processing typically examine either their structure or their behavior, but not both. We propose a methodology grounded in the theory of causal mediation analysis for interpreting which parts of a model are causally implicated in its behavior. It enables us to analyze the mechanisms by which information flows from input to output through various model components, known as mediators. We apply this methodology to analyze gender bias in pre-trained Transformer LLMs. We study the role of individual neurons and attention heads in mediating gender bias across three datasets designed to gauge a model's sensitivity to gender bias. Our mediation analysis reveals that gender bias effects are (i) sparse, concentrated in a small part of the network; (ii) synergistic, amplified or repressed by different components; and (iii) decomposable into effects flowing directly from the input and indirectly through the mediators.
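The core idea of the mediation analysis can be illustrated with a toy example. The sketch below is a minimal, additive stand-in (not the paper's implementation; the paper measures effects as relative changes in candidate-word probabilities under interventions on real Transformer components): an input `u` (e.g., a stereotypical vs. anti-stereotypical prompt) influences the output both directly and through a hypothetical mediator `z` (a "gender neuron" activation). Intervening on `z` separates the total effect into direct and indirect parts.

```python
# Conceptual sketch of causal mediation analysis (hypothetical toy model,
# not the paper's code). u=0/u=1 stand for counterfactual inputs; z is the
# activation of a hypothetical mediator ("gender neuron").

def mediator(u):
    # z(u): mediator activation induced by input u
    return 0.2 + 0.6 * u

def output(u, z):
    # y(u, z): the model's preference score, combining a direct path
    # from u and an indirect path through the mediator z
    return 0.5 + 0.3 * u + 0.4 * z

def total_effect():
    # TE: change in output when the input intervention propagates end-to-end
    return output(1, mediator(1)) - output(0, mediator(0))

def natural_indirect_effect():
    # NIE: keep the input fixed at u=0, but set the mediator to its
    # counterfactual value z(1); isolates the path through the mediator
    return output(0, mediator(1)) - output(0, mediator(0))

def natural_direct_effect():
    # NDE: change the input, but freeze the mediator at z(0);
    # isolates the direct path
    return output(1, mediator(0)) - output(0, mediator(0))
```

In this linear toy model the decomposition is exact: `natural_direct_effect() + natural_indirect_effect()` equals `total_effect()`, mirroring the paper's finding that bias effects decompose into direct and mediated components. In a real Transformer, the analogue of "setting z" is overwriting a neuron or attention-head activation during the forward pass.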

Authors (9)
  1. Jesse Vig (18 papers)
  2. Sebastian Gehrmann (48 papers)
  3. Yonatan Belinkov (111 papers)
  4. Sharon Qian (4 papers)
  5. Daniel Nevo (19 papers)
  6. Simas Sakenis (1 paper)
  7. Jason Huang (6 papers)
  8. Yaron Singer (28 papers)
  9. Stuart Shieber (6 papers)
Citations (119)
