
Towards Interpreting and Mitigating Shortcut Learning Behavior of NLU Models (2103.06922v3)

Published 11 Mar 2021 in cs.CL and cs.LG

Abstract: Recent studies indicate that NLU models are prone to rely on shortcut features for prediction, without achieving true language understanding. As a result, these models fail to generalize to real-world out-of-distribution data. In this work, we show that the words in an NLU training set can be modeled as a long-tailed distribution, and we report two findings: 1) NLU models have a strong preference for features located at the head of the long-tailed distribution, and 2) shortcut features are picked up during the very first iterations of model training. These two observations are then used to formulate a measurement that quantifies the shortcut degree of each training sample. Based on this shortcut measurement, we propose LTGR, a shortcut mitigation framework that suppresses the model from making overconfident predictions on samples with a large shortcut degree. Experimental results on three NLU benchmarks demonstrate that our long-tailed distribution explanation accurately reflects the shortcut learning behavior of NLU models. Further analysis indicates that LTGR improves generalization accuracy on OOD data while preserving accuracy on in-distribution data.
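The abstract does not spell out how the shortcut-degree measurement feeds into training, so the sketch below illustrates one plausible reading: a per-sample confidence-suppression term that flattens the target distribution in proportion to the sample's shortcut degree. The function name `ltgr_style_loss`, the label-smoothing scheme, and the `alpha` weight are illustrative assumptions, not the paper's actual LTGR formulation.

```python
# Minimal PyTorch sketch of shortcut-degree-weighted confidence suppression,
# in the spirit of the LTGR idea described in the abstract. All specifics
# (smoothing scheme, alpha, how shortcut_degree is computed) are assumptions.
import torch
import torch.nn.functional as F


def ltgr_style_loss(logits, labels, shortcut_degree, num_classes, alpha=1.0):
    """Cross-entropy plus a soft-label term that flattens the target
    distribution in proportion to each sample's shortcut degree.

    logits:          (batch, num_classes) model outputs
    labels:          (batch,) gold class indices
    shortcut_degree: (batch,) values in [0, 1]; 1 = strongly shortcut-driven
    alpha:           weight of the suppression term (assumed hyperparameter)
    """
    ce = F.cross_entropy(logits, labels)

    # Smoothed targets: the larger the shortcut degree, the more probability
    # mass is moved from the gold label toward a uniform distribution,
    # discouraging overconfident predictions on suspected shortcut samples.
    one_hot = F.one_hot(labels, num_classes).float()
    uniform = torch.full_like(one_hot, 1.0 / num_classes)
    s = shortcut_degree.unsqueeze(1)                  # (batch, 1)
    soft_targets = (1.0 - s) * one_hot + s * uniform

    log_probs = F.log_softmax(logits, dim=-1)
    suppression = -(soft_targets * log_probs).sum(dim=-1).mean()

    return ce + alpha * suppression


if __name__ == "__main__":
    # Toy usage: 4 samples, 3 classes, hypothetical shortcut degrees.
    logits = torch.randn(4, 3)
    labels = torch.tensor([0, 2, 1, 0])
    shortcut_degree = torch.tensor([0.9, 0.1, 0.5, 0.0])
    print(ltgr_style_loss(logits, labels, shortcut_degree, num_classes=3))
```

In this reading, samples flagged as shortcut-driven contribute a softer training signal, which is one way to realize "suppressing the model from making overconfident predictions" on high-shortcut-degree samples while leaving low-shortcut samples essentially untouched.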

Authors (8)
  1. Mengnan Du (90 papers)
  2. Varun Manjunatha (23 papers)
  3. Rajiv Jain (20 papers)
  4. Ruchi Deshpande (1 paper)
  5. Franck Dernoncourt (161 papers)
  6. Jiuxiang Gu (73 papers)
  7. Tong Sun (49 papers)
  8. Xia Hu (186 papers)
Citations (92)
