
The Calibration Gap between Model and Human Confidence in Large Language Models (2401.13835v1)

Published 24 Jan 2024 in cs.LG, cs.AI, cs.CL, and cs.HC

Abstract: For LLMs to be trusted by humans, they need to be well-calibrated in the sense that they can accurately assess and communicate how likely it is that their predictions are correct. Recent work has focused on the quality of internal LLM confidence assessments, but the question remains of how well LLMs can communicate this internal model confidence to human users. This paper explores the disparity between external human confidence in an LLM's responses and the internal confidence of the model. Through experiments involving multiple-choice questions, we systematically examine human users' ability to discern the reliability of LLM outputs. Our study focuses on two key areas: (1) assessing users' perception of true LLM confidence and (2) investigating the impact of tailored explanations on this perception. The research highlights that default explanations from LLMs often lead to user overestimation of both the model's confidence and its accuracy. By modifying the explanations to more accurately reflect the LLM's internal confidence, we observe a significant shift in user perception, aligning it more closely with the model's actual confidence levels. This adjustment in explanatory approach demonstrates potential for enhancing user trust and accuracy in assessing LLM outputs. The findings underscore the importance of transparent communication of confidence levels in LLMs, particularly in high-stakes applications where understanding the reliability of AI-generated information is essential.
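
The "calibration gap" the abstract describes can be made concrete with a standard calibration metric such as expected calibration error (ECE), computed once from the model's internal confidence and once from participants' reported confidence in the model's answers. The sketch below is a minimal, generic ECE implementation for illustration only; the function name, toy data, and bin count are assumptions and are not taken from the paper's methodology.

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """Bin predictions by confidence and compare the average confidence
    in each bin to the empirical accuracy in that bin (standard ECE)."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if not mask.any():
            continue
        avg_conf = confidences[mask].mean()   # mean stated confidence in bin
        accuracy = correct[mask].mean()       # fraction actually correct in bin
        ece += mask.mean() * abs(avg_conf - accuracy)
    return ece

# Hypothetical toy data: per-question confidence and correctness (1 = correct)
model_conf = [0.95, 0.80, 0.60, 0.99, 0.70]
human_conf = [0.90, 0.90, 0.85, 0.95, 0.80]
was_correct = [1, 1, 0, 1, 0]

print("model ECE:", expected_calibration_error(model_conf, was_correct))
print("human ECE:", expected_calibration_error(human_conf, was_correct))
```

In this framing, a large difference between the two ECE values (or between the two reliability curves) would reflect the kind of gap the paper studies: humans reading default explanations assign higher confidence than the model's internal confidence warrants.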

Authors (8)
  1. Mark Steyvers (18 papers)
  2. Heliodoro Tejeda (1 paper)
  3. Aakriti Kumar (4 papers)
  4. Catarina Belem (7 papers)
  5. Sheer Karny (2 papers)
  6. Xinyue Hu (27 papers)
  7. Lukas Mayer (4 papers)
  8. Padhraic Smyth (52 papers)
Citations (7)