FineDeb: A Debiasing Framework for Language Models (2302.02453v1)

Published 5 Feb 2023 in cs.CL and cs.CY

Abstract: As language models are increasingly included in human-facing machine learning tools, bias against demographic subgroups has gained attention. We propose FineDeb, a two-phase debiasing framework for language models that starts with contextual debiasing of embeddings learned by pretrained language models. The model is then fine-tuned on a language modeling objective. Our results show that FineDeb offers stronger debiasing in comparison to other methods, which often result in models as biased as the original language model. Our framework is generalizable to demographics with multiple classes, and we demonstrate its effectiveness through extensive experiments and comparisons with state-of-the-art techniques. We release our code and data on GitHub.

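The abstract describes a two-phase recipe: first debias the pretrained embeddings, then fine-tune the model on a standard language modeling objective. The sketch below illustrates only that two-phase shape, not FineDeb's actual method: phase 1 here is a simple hard-debiasing-style projection (the paper's "contextual debiasing" step differs), and the model, token ids, and data are all hypothetical toy stand-ins.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# ----- Phase 1 (illustrative stand-in for FineDeb's contextual debiasing) -----
# Estimate a single "demographic direction" from paired token embeddings
# and project it out of every row of the embedding matrix.
def debias_embeddings(emb, pairs):
    diffs = torch.stack([emb[a] - emb[b] for a, b in pairs])
    direction = diffs.mean(0)
    direction = direction / direction.norm()
    # Remove the component of each embedding along the bias direction.
    return emb - (emb @ direction).unsqueeze(1) * direction

# ----- Phase 2: fine-tune on a language modeling objective -----
class TinyLM(nn.Module):
    """A toy next-token model standing in for a pretrained LM."""
    def __init__(self, vocab=100, dim=32):
        super().__init__()
        self.emb = nn.Embedding(vocab, dim)
        self.rnn = nn.GRU(dim, dim, batch_first=True)
        self.head = nn.Linear(dim, vocab)

    def forward(self, x):
        h, _ = self.rnn(self.emb(x))
        return self.head(h)

model = TinyLM()

# Pretend token ids 1 and 2 form a paired demographic term (e.g. "he"/"she").
with torch.no_grad():
    model.emb.weight.copy_(debias_embeddings(model.emb.weight, [(1, 2)]))

# Fine-tune with next-token cross-entropy on a toy batch.
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
tokens = torch.randint(0, 100, (8, 17))
inputs, targets = tokens[:, :-1], tokens[:, 1:]
for _ in range(3):
    logits = model(inputs)                              # (B, T, vocab)
    loss = nn.functional.cross_entropy(
        logits.reshape(-1, 100), targets.reshape(-1))
    opt.zero_grad()
    loss.backward()
    opt.step()
print(f"LM loss after fine-tuning steps: {loss.item():.3f}")
```

The point of the two phases is that debiasing embeddings alone can be undone or left incomplete; fine-tuning on the language modeling objective afterward lets the rest of the network adapt to the modified embeddings. For FineDeb's actual procedure and multi-class demographic handling, see the paper and its GitHub release.
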
Citations (4)
