
BiasEdit: Debiasing Stereotyped Language Models via Model Editing (2503.08588v1)

Published 11 Mar 2025 in cs.CL, cs.AI, cs.CY, and cs.LG

Abstract: Previous studies have established that LLMs manifest stereotyped biases. Existing debiasing strategies, such as retraining a model with counterfactual data, representation projection, and prompting, often fail to eliminate bias efficiently or to directly alter the models' biased internal representations. To address these issues, we propose BiasEdit, an efficient model editing method that removes stereotypical bias from LLMs through lightweight networks acting as editors to generate parameter updates. BiasEdit employs a debiasing loss that guides the editor networks to conduct local edits on a subset of an LLM's parameters, while a retention loss preserves the model's language modeling abilities during editing. Experiments on StereoSet and Crows-Pairs demonstrate BiasEdit's effectiveness, efficiency, and robustness in eliminating bias compared to tangential debiasing baselines, with little to no impact on the LLMs' general capabilities. In addition, we conduct bias tracing to probe bias in various modules and explore the effects of bias editing on different components of LLMs.
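The abstract describes an editing objective that balances two terms: a debiasing loss over paired stereotyped/anti-stereotyped continuations and a retention loss that keeps language modeling behavior intact. The following is a minimal sketch of such a combined objective, not the paper's actual formulation; the specific loss forms (squared log-probability gap, squared drift) and the weight `lam` are illustrative assumptions.

```python
def debias_loss(logp_stereo, logp_anti):
    # Hypothetical symmetry term: penalize the gap between the log-probabilities
    # the edited model assigns to the stereotyped and anti-stereotyped
    # continuations of the same context (zero when they are equally likely).
    return (logp_stereo - logp_anti) ** 2

def retention_loss(logps_before, logps_after):
    # Hypothetical retention term: mean squared drift of the model's
    # log-probabilities on unrelated text, to preserve language modeling
    # abilities while the editor networks update parameters.
    return sum((a - b) ** 2 for b, a in zip(logps_before, logps_after)) / len(logps_before)

def editing_objective(logp_stereo, logp_anti, logps_before, logps_after, lam=1.0):
    # Combined objective guiding the editor networks' parameter updates;
    # lam trades off debiasing against retention (an assumed hyperparameter).
    return debias_loss(logp_stereo, logp_anti) + lam * retention_loss(logps_before, logps_after)
```

Under this sketch, an edit that equalizes the two continuations without shifting predictions on unrelated text drives the objective to zero.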
