All Should Be Equal in the Eyes of Language Models: Counterfactually Aware Fair Text Generation (2311.05451v1)

Published 9 Nov 2023 in cs.CL, cs.CY, and cs.LG

Abstract: Fairness in Language Models (LMs) remains a longstanding challenge, given the inherent biases in training data that can be perpetuated by models and affect downstream tasks. Recent methods employ expensive retraining or attempt debiasing during inference by constraining model outputs to contrast from a reference set of biased templates or exemplars. Regardless, they don't address the primary goal of fairness: to maintain equitability across different demographic groups. In this work, we posit that inferencing LMs to generate unbiased output for one demographic under a context ensues from being aware of outputs for other demographics under the same context. To this end, we propose Counterfactually Aware Fair InferencE (CAFIE), a framework that dynamically compares the model's understanding of diverse demographics to generate more equitable sentences. We conduct an extensive empirical evaluation using base LMs of varying sizes across three diverse datasets and find that CAFIE outperforms strong baselines. CAFIE produces fairer text and strikes the best balance between fairness and language modeling capability.
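The abstract's core idea, comparing a model's next-token predictions across demographic counterfactuals of the same context, can be illustrated with a minimal sketch. Note this is an assumed toy combination rule (element-wise minimum plus renormalization), not the paper's actual CAFIE scoring; the function name and the toy distributions are hypothetical.

```python
def equitable_next_token_probs(dists):
    """Combine next-token distributions computed under counterfactual
    demographic substitutions in the same context (e.g., 'he' vs 'she').

    Illustrative rule (an assumption, not CAFIE's exact method): keep the
    element-wise minimum probability across groups, then renormalize, so a
    token scores high only if it is plausible for every demographic.
    """
    vocab = dists[0].keys()
    combined = {tok: min(d[tok] for d in dists) for tok in vocab}
    z = sum(combined.values())
    return {tok: p / z for tok, p in combined.items()}

# Toy next-token distributions for two counterfactual prompts
p_male = {"doctor": 0.6, "nurse": 0.1, "person": 0.3}
p_female = {"doctor": 0.2, "nurse": 0.5, "person": 0.3}
fair = equitable_next_token_probs([p_male, p_female])
# Tokens whose probability diverges across groups ('doctor', 'nurse')
# are suppressed relative to the group-invariant token ('person').
```

In this toy setup the combined distribution puts the most mass on "person", the only token both counterfactual contexts agree on, which is the qualitative behavior the abstract describes.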

Authors (7)
  1. Pragyan Banerjee (2 papers)
  2. Abhinav Java (11 papers)
  3. Surgan Jandial (14 papers)
  4. Simra Shahid (11 papers)
  5. Shaz Furniturewala (7 papers)
  6. Balaji Krishnamurthy (68 papers)
  7. Sumit Bhatia (30 papers)
Citations (3)