
Are Large Language Models Really Robust to Word-Level Perturbations? (2309.11166v2)

Published 20 Sep 2023 in cs.CL and cs.AI

Abstract: The rapid advance in the scale and capability of LLMs positions them as promising tools for a variety of downstream tasks. Beyond pursuing better performance and avoiding harmful responses to particular prompts, ensuring the responsibility of an LLM has drawn much attention to its robustness. However, existing evaluation methods mostly rely on traditional question-answering datasets with predefined supervised labels, which do not align with the generation capabilities of contemporary LLMs. To address this issue, we propose a novel rational evaluation approach that leverages pre-trained reward models as diagnostic tools to evaluate the longer conversations that LLMs generate from more challenging open questions, which we refer to as the Reward Model for Reasonable Robustness Evaluation (TREvaL). Longer conversations reveal how comprehensively an LLM understands a question, a capability not fully captured by individual words or letters, which can be oversimplified and carry inherent biases. Our extensive empirical experiments demonstrate that TREvaL provides an innovative method for evaluating the robustness of an LLM. Furthermore, our results show that LLMs are frequently vulnerable to word-level perturbations that are commonplace in daily language usage. Notably, we are surprised to find that robustness tends to decrease as fine-tuning (SFT and RLHF) proceeds. The code for TREvaL is available at https://github.com/Harry-mic/TREvaL.
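For intuition, here is a minimal Python sketch of the kind of pipeline the abstract describes: perturb an open question at the word level, let the model answer both versions, and measure how much a pre-trained reward model's score drops. The specific perturbation (an inner-character swap) and the names `perturb_words`, `generate`, and `reward` are illustrative assumptions, not the authors' implementation; see the linked repository for the actual TREvaL code.

```python
import random
from typing import Callable, Dict, List


def perturb_words(prompt: str, rate: float = 0.15, seed: int = 0) -> str:
    """Apply a simple word-level perturbation (swap two inner characters),
    a stand-in for the everyday typos the paper targets. The perturbation
    set used by TREvaL itself may differ."""
    rng = random.Random(seed)
    out = []
    for w in prompt.split():
        if len(w) > 3 and rng.random() < rate:
            i = rng.randrange(1, len(w) - 2)
            w = w[:i] + w[i + 1] + w[i] + w[i + 2:]
        out.append(w)
    return " ".join(out)


def reward_drop(
    prompts: List[str],
    generate: Callable[[str], str],       # LLM under test: prompt -> response
    reward: Callable[[str, str], float],  # reward model: (prompt, response) -> scalar
    rate: float = 0.15,
) -> Dict[str, float]:
    """Score each prompt's response before and after perturbation and report
    the mean reward drop; a larger drop suggests lower word-level robustness."""
    clean, perturbed = [], []
    for p in prompts:
        clean.append(reward(p, generate(p)))
        q = perturb_words(p, rate)
        # Score the perturbed-prompt response against the original question,
        # so only the model's robustness (not the question) is penalized.
        perturbed.append(reward(p, generate(q)))
    n = len(prompts)
    return {
        "mean_clean_reward": sum(clean) / n,
        "mean_perturbed_reward": sum(perturbed) / n,
        "mean_drop": (sum(clean) - sum(perturbed)) / n,
    }
```

To try the sketch, wire `generate` to any chat model (an API call or a local pipeline) and `reward` to any pre-trained reward model that returns a scalar score; sweeping `rate` shows how the reward degrades as perturbations become heavier.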

Authors (13)
  1. Haoyu Wang
  2. Guozheng Ma
  3. Cong Yu
  4. Ning Gui
  5. Linrui Zhang
  6. Zhiqi Huang
  7. Suwei Ma
  8. Yongzhe Chang
  9. Sen Zhang
  10. Li Shen
  11. Xueqian Wang
  12. Peilin Zhao
  13. Dacheng Tao
Citations (19)