UBENCH: Benchmarking Uncertainty in Large Language Models with Multiple Choice Questions (2406.12784v1)

Published 18 Jun 2024 in cs.CL

Abstract: The rapid development of LLMs has yielded promising practical results. However, their low interpretability often leads to errors in unforeseen circumstances, limiting their utility. Many works have focused on creating comprehensive evaluation systems, but previous benchmarks have primarily assessed problem-solving abilities while neglecting response uncertainty, which may result in unreliability. Recent methods for measuring LLM reliability are resource-intensive and unable to test black-box models. To address this, we propose UBENCH, a comprehensive benchmark for evaluating LLM reliability. UBENCH includes 3,978 multiple-choice questions covering knowledge, language, understanding, and reasoning abilities. Experimental results show that UBENCH achieves state-of-the-art performance, while its single-sampling method significantly saves computational resources compared to baseline methods that require multiple samples. Additionally, based on UBENCH, we evaluate the reliability of 15 popular LLMs, finding GLM4 to be the most outstanding, closely followed by GPT-4. We also explore the impact of Chain-of-Thought prompts, role-playing prompts, option order, and temperature on LLM reliability, analyzing the varying effects on different LLMs.
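To make the abstract's setup concrete, below is a minimal sketch of what a single-sample, multiple-choice confidence elicitation plus a calibration score might look like. Everything here is an assumption for illustration rather than the paper's exact protocol: the prompt template, the A-J confidence-interval options, the `ask_llm` callable, and the choice of Expected Calibration Error (ECE) as the reliability metric are all hypothetical.

```python
# Hypothetical sketch of UBENCH-style single-sample confidence elicitation.
# The prompt wording, the A-J interval options, and `ask_llm` are
# illustrative assumptions, not the paper's exact benchmark format.

from typing import Callable, List, Tuple

OPTION_LETTERS = [chr(ord("A") + i) for i in range(10)]
# Map each letter to the midpoint of its 10-point confidence interval
# (A = 0-10%, ..., J = 90-100%).
OPTIONS = {letter: (10 * i + 5) / 100 for i, letter in enumerate(OPTION_LETTERS)}

_option_lines = "\n".join(
    f"{letter}. {10 * i}%-{10 * (i + 1)}%" for i, letter in enumerate(OPTION_LETTERS)
)
PROMPT_TEMPLATE = (
    "Question: {question}\n"
    "Proposed answer: {answer}\n"
    "How confident are you that the proposed answer is correct?\n"
    + _option_lines
    + "\nReply with a single letter."
)

def elicit_confidence(ask_llm: Callable[[str], str],
                      question: str, answer: str) -> float:
    """Single-sample elicitation: one model call per benchmark item."""
    reply = ask_llm(PROMPT_TEMPLATE.format(question=question, answer=answer))
    letter = reply.strip()[:1].upper()
    return OPTIONS.get(letter, 0.5)  # fall back to 0.5 on a malformed reply

def expected_calibration_error(records: List[Tuple[float, bool]],
                               n_bins: int = 10) -> float:
    """Standard ECE over (stated confidence, answer-is-correct) pairs."""
    bins = [[] for _ in range(n_bins)]
    for conf, correct in records:
        idx = min(int(conf * n_bins), n_bins - 1)
        bins[idx].append((conf, correct))
    total = len(records)
    ece = 0.0
    for bucket in bins:
        if not bucket:
            continue
        avg_conf = sum(c for c, _ in bucket) / len(bucket)
        accuracy = sum(1 for _, ok in bucket if ok) / len(bucket)
        ece += (len(bucket) / total) * abs(avg_conf - accuracy)
    return ece
```

Collecting one (confidence, correctness) pair per item and computing a calibration score like the one above requires only a single model call per question, which is the resource advantage the abstract claims over baselines that need repeated sampling; lower ECE indicates better-calibrated confidence.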

Authors (10)
  1. Xunzhi Wang (6 papers)
  2. Zhuowei Zhang (2 papers)
  3. Qiongyu Li (2 papers)
  4. Gaonan Chen (1 paper)
  5. Mengting Hu (20 papers)
  6. Zhiyu Li (69 papers)
  7. Bitong Luo (1 paper)
  8. Hang Gao (61 papers)
  9. Zhixin Han (2 papers)
  10. Haotian Wang (60 papers)
Citations (1)