End-to-End Self-Debiasing Framework for Robust NLU Training (2109.02071v1)

Published 5 Sep 2021 in cs.CL

Abstract: Existing Natural Language Understanding (NLU) models have been shown to incorporate dataset biases, leading to strong performance on in-distribution (ID) test sets but poor performance on out-of-distribution (OOD) ones. We introduce a simple yet effective debiasing framework whereby the shallow representations of the main model are used to derive a bias model, and both models are trained simultaneously. We demonstrate on three well-studied NLU tasks that, despite its simplicity, our method leads to competitive OOD results. It significantly outperforms other debiasing approaches on two tasks, while still delivering high in-distribution performance.
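The abstract's core idea, a bias head reading the main model's shallow representations while both are trained jointly, can be illustrated with a rough NumPy sketch. This is a minimal forward pass only, assuming a product-of-experts combination of the two heads' logits (a common choice in this line of debiasing work); the dimensions, layer shapes, and exact loss are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def nll(logits, labels):
    # Mean negative log-likelihood (cross-entropy) over the batch.
    p = softmax(logits)
    return -np.log(p[np.arange(len(labels)), labels] + 1e-12).mean()

# Toy dimensions (hypothetical; the paper builds on a pretrained encoder).
d_in, d_hid, n_cls, batch = 8, 16, 3, 4
x = rng.normal(size=(batch, d_in))
y = rng.integers(0, n_cls, size=batch)

# Shared lower layer: its output plays the role of the "shallow representation".
W1 = rng.normal(scale=0.1, size=(d_in, d_hid))
shallow = np.tanh(x @ W1)

# Bias head reads the shallow representation directly.
Wb = rng.normal(scale=0.1, size=(d_hid, n_cls))
bias_logits = shallow @ Wb

# Main head sits on a deeper transform of the same representation.
W2 = rng.normal(scale=0.1, size=(d_hid, d_hid))
Wm = rng.normal(scale=0.1, size=(d_hid, n_cls))
main_logits = np.tanh(shallow @ W2) @ Wm

# Product-of-experts combination: summing log-probabilities pushes the
# main head to account for what the bias head cannot explain.
combined = main_logits + np.log(softmax(bias_logits) + 1e-12)

# Both models trained simultaneously: the bias head gets its own
# cross-entropy term; the main head is supervised through the combined logits.
loss = nll(combined, y) + nll(bias_logits, y)
print(float(loss) > 0.0)
```

At inference time, only the main head would be used; the bias head exists solely to absorb shortcut signals during training.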

Authors (4)
  1. Abbas Ghaddar (18 papers)
  2. Philippe Langlais (23 papers)
  3. Mehdi Rezagholizadeh (78 papers)
  4. Ahmad Rashid (24 papers)
Citations (33)