
How Robust is GPT-3.5 to Predecessors? A Comprehensive Study on Language Understanding Tasks (2303.00293v1)

Published 1 Mar 2023 in cs.CL

Abstract: The GPT-3.5 models have demonstrated impressive performance in various NLP tasks, showcasing their strong understanding and reasoning capabilities. However, their robustness and ability to handle the various complexities of the open world have yet to be explored, which is especially crucial for assessing model stability and is a key aspect of trustworthy AI. In this study, we perform a comprehensive experimental analysis of GPT-3.5, exploring its robustness using 21 datasets (about 116K test samples) with 66 text transformations from TextFlint that cover 9 popular Natural Language Understanding (NLU) tasks. Our findings indicate that while GPT-3.5 outperforms existing fine-tuned models on some tasks, it still encounters significant robustness degradation, such as its average performance dropping by up to 35.74% and 43.59% in natural language inference and sentiment analysis tasks, respectively. We also show that GPT-3.5 faces specific robustness challenges, including robustness instability, prompt sensitivity, and number sensitivity. These insights are valuable for understanding its limitations and for guiding future research on addressing these challenges to enhance GPT-3.5's overall performance and generalization abilities.
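
For intuition, the sketch below illustrates the general shape of the evaluation the abstract describes: perturb test inputs with a text transformation and measure how much performance drops relative to the clean baseline. It is a minimal stand-in, not TextFlint's actual API; the character-swap transformation, the accuracy numbers, and the assumption that the reported drops are relative decreases are all illustrative, and the paper's exact metric definitions may differ.

```python
import random


def char_swap(text: str, rate: float = 0.1, seed: int = 0) -> str:
    """Swap adjacent characters inside words at a given rate.

    A simplified stand-in for one of TextFlint's character-level
    transformations (hypothetical, not the library's real API).
    """
    rng = random.Random(seed)
    chars = list(text)
    for i in range(len(chars) - 1):
        if chars[i].isalpha() and chars[i + 1].isalpha() and rng.random() < rate:
            chars[i], chars[i + 1] = chars[i + 1], chars[i]
    return "".join(chars)


def relative_drop(clean_acc: float, perturbed_acc: float) -> float:
    """Relative performance degradation (%), assuming the paper's
    headline drops are measured this way."""
    return 100.0 * (clean_acc - perturbed_acc) / clean_acc


# Hypothetical numbers: a model at 0.90 accuracy on clean NLI data that
# falls to 0.58 on transformed inputs shows a ~35.6% relative drop,
# comparable in magnitude to the degradation the abstract reports.
print(char_swap("The cat sat on the mat."))
print(f"{relative_drop(0.90, 0.58):.2f}%")
```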

Authors (10)
  1. Xuanting Chen (4 papers)
  2. Junjie Ye (66 papers)
  3. Can Zu (5 papers)
  4. Nuo Xu (37 papers)
  5. Rui Zheng (78 papers)
  6. Minlong Peng (18 papers)
  7. Jie Zhou (687 papers)
  8. Tao Gui (127 papers)
  9. Qi Zhang (784 papers)
  10. Xuanjing Huang (287 papers)
Citations (69)