
Factual Confidence of LLMs: on Reliability and Robustness of Current Estimators (2406.13415v1)

Published 19 Jun 2024 in cs.CL and cs.LG

Abstract: LLMs tend to be unreliable in the factuality of their answers. To address this problem, NLP researchers have proposed a range of techniques to estimate LLMs' confidence over facts. However, due to the lack of a systematic comparison, it is not clear how the different methods compare to one another. To fill this gap, we present a survey and empirical comparison of estimators of factual confidence. We define an experimental framework allowing for fair comparison, covering both fact-verification and question answering. Our experiments across a series of LLMs indicate that trained hidden-state probes provide the most reliable confidence estimates, albeit at the expense of requiring access to weights and training data. We also conduct a deeper assessment of factual confidence by measuring the consistency of model behavior under meaning-preserving variations in the input. We find that the confidence of LLMs is often unstable across semantically equivalent inputs, suggesting that there is much room for improvement of the stability of models' parametric knowledge. Our code is available at https://github.com/amazon-science/factual-confidence-of-LLMs.
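The hidden-state probes highlighted in the abstract train a lightweight classifier on an LLM's internal representations to predict whether a statement is factually correct, and use the classifier's probability as the confidence estimate. The sketch below is a minimal illustration of that idea, not the authors' implementation (their code is in the repository linked above): it assumes a HuggingFace causal LM (`gpt2` as a placeholder) and a toy set of labeled statements, and fits a logistic-regression probe on the final layer's last-token hidden state.

```python
# Minimal sketch of a trained hidden-state probe for factual confidence.
# Assumptions: model name, statements, and labels are illustrative placeholders.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from sklearn.linear_model import LogisticRegression

model_name = "gpt2"  # placeholder; the paper evaluates a range of larger LLMs
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, output_hidden_states=True)
model.eval()

def last_token_hidden_state(text: str) -> torch.Tensor:
    """Return the final layer's hidden state of the last token for `text`."""
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs)
    return outputs.hidden_states[-1][0, -1]  # shape: (hidden_dim,)

# Toy labeled facts (1 = true, 0 = false); a real probe needs a sizable training set.
statements = [
    ("Paris is the capital of France.", 1),
    ("The Moon is larger than the Earth.", 0),
    ("Water boils at 100 degrees Celsius at sea level.", 1),
    ("The Great Wall of China is located in Brazil.", 0),
]
X = torch.stack([last_token_hidden_state(s) for s, _ in statements]).numpy()
y = [label for _, label in statements]

probe = LogisticRegression(max_iter=1000).fit(X, y)

# The probe's predicted probability of "true" serves as the factual-confidence score.
test = "Berlin is the capital of Germany."
features = last_token_hidden_state(test).numpy().reshape(1, -1)
confidence = probe.predict_proba(features)[0, 1]
print(f"Estimated factual confidence: {confidence:.2f}")
```

The same score can then be probed for the robustness the paper measures: feeding the probe several paraphrases of one statement and checking how much the confidence varies across semantically equivalent inputs.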

Authors (6)
  1. Matéo Mahaut (5 papers)
  2. Laura Aina (8 papers)
  3. Paula Czarnowska (7 papers)
  4. Momchil Hardalov (23 papers)
  5. Thomas Müller (83 papers)
  6. Lluís Màrquez (31 papers)
Citations (5)