GLUE-X: Evaluating Natural Language Understanding Models from an Out-of-distribution Generalization Perspective (2211.08073v4)
Abstract: Pre-trained language models (PLMs) are known to improve the generalization performance of natural language understanding models by leveraging large amounts of data during the pre-training phase. However, the out-of-distribution (OOD) generalization problem remains a challenge in many NLP tasks, limiting the real-world deployment of these methods. This paper presents the first attempt at creating a unified benchmark, named GLUE-X, for evaluating OOD robustness in NLP models, highlighting the importance of OOD robustness and providing insights on how to measure and improve the robustness of a model. The benchmark includes 13 publicly available datasets for OOD testing, and evaluations are conducted on 8 classic NLP tasks across 21 widely used PLMs, including GPT-3 and GPT-3.5. Our findings confirm the need for improved OOD accuracy in NLP tasks, as significant performance degradation was observed in all settings compared to in-distribution (ID) accuracy.
- Linyi Yang (52 papers)
- Shuibai Zhang (4 papers)
- Libo Qin (77 papers)
- Yafu Li (26 papers)
- Yidong Wang (43 papers)
- Hanmeng Liu (11 papers)
- Jindong Wang (150 papers)
- Xing Xie (220 papers)
- Yue Zhang (620 papers)