CFL: Causally Fair Language Models Through Token-level Attribute Controlled Generation (2306.00374v1)

Published 1 Jun 2023 in cs.CL and cs.AI

Abstract: We propose a method to control the attributes of Language Models (LMs) for the text generation task using Causal Average Treatment Effect (ATE) scores and counterfactual augmentation. We explore this method in the context of LM detoxification, and propose the Causally Fair Language (CFL) architecture for detoxifying pre-trained LMs in a plug-and-play manner. Our architecture is based on a Structural Causal Model (SCM) that is mathematically transparent and computationally efficient compared with many existing detoxification techniques. We also propose several new metrics that aim to better understand the behaviour of LMs in the context of toxic text generation. Further, we achieve state-of-the-art performance on toxic degeneration, computed using the RealToxicityPrompts (RTP) benchmark. Our experiments show that CFL achieves such detoxification without much impact on model perplexity. We also show that CFL mitigates the unintended bias problem through experiments on the BOLD dataset.
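The core idea of the abstract, scoring a token's causal effect on an attribute by comparing contexts with the token present against counterfactuals where it is replaced, can be illustrated with a minimal sketch. The toy toxicity scorer, the `[T]` placeholder convention, and all function names below are assumptions for illustration only, not the paper's implementation, which uses a trained SCM and classifier.

```python
# Hypothetical sketch: token-level ATE scoring via counterfactual
# substitution. The word-list "toxicity" scorer is a stand-in for a
# real attribute classifier.

def toxicity(text: str) -> float:
    """Toy attribute scorer: fraction of words in a small toxic lexicon."""
    toxic_words = {"awful", "stupid"}
    words = text.split()
    return sum(w in toxic_words for w in words) / max(len(words), 1)

def token_ate(token: str, contexts: list[str], placeholder: str = "thing") -> float:
    """Average Treatment Effect of `token` on the attribute score:
    mean score over contexts containing the token (treatment) minus
    mean score over counterfactuals with the token swapped out (control)."""
    treated = [c.replace("[T]", token) for c in contexts]
    control = [c.replace("[T]", placeholder) for c in contexts]
    avg = lambda texts: sum(toxicity(t) for t in texts) / len(texts)
    return avg(treated) - avg(control)

contexts = ["that was a [T] idea", "what a [T] day"]
print(token_ate("stupid", contexts))  # positive: token raises toxicity
print(token_ate("lovely", contexts))  # zero: token has no causal effect
```

Tokens with a high positive ATE are the ones a plug-and-play controller would down-weight at generation time, which is the attribute-control mechanism the abstract describes at a high level.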

Authors (4)
  1. Rahul Madhavan (12 papers)
  2. Rishabh Garg (4 papers)
  3. Kahini Wadhawan (11 papers)
  4. Sameep Mehta (27 papers)
Citations (3)