A Survey of Confidence Estimation and Calibration in Large Language Models (2311.08298v2)

Published 14 Nov 2023 in cs.CL and cs.AI

Abstract: LLMs have demonstrated remarkable capabilities across a wide range of tasks in various domains. Despite their impressive performance, they can be unreliable due to factual errors in their generations. Assessing their confidence and calibrating them across different tasks can help mitigate risks and enable LLMs to produce better generations. There has been much recent research aiming to address this, but there has been no comprehensive overview to organize it and outline the main lessons learned. The present survey aims to bridge this gap. In particular, we outline the challenges and summarize recent technical advancements for LLM confidence estimation and calibration. We further discuss their applications and suggest promising directions for future work.
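The calibration the abstract refers to is commonly quantified with Expected Calibration Error (ECE): the weighted average gap between a model's stated confidence and its actual accuracy across confidence bins. A minimal sketch (the function name, binning scheme, and toy data are illustrative, not taken from the paper):

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """ECE: bin predictions by confidence, then average the per-bin
    |mean confidence - accuracy| gap, weighted by bin population."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if not mask.any():
            continue
        gap = abs(confidences[mask].mean() - correct[mask].mean())
        ece += mask.mean() * gap  # weight by fraction of samples in bin
    return ece

# Toy example: the model says 80% confidence and is right 8 times out
# of 10, so it is perfectly calibrated on this sample and ECE is 0.
conf = [0.8] * 10
corr = [1, 1, 1, 1, 1, 1, 1, 1, 0, 0]
print(round(expected_calibration_error(conf, corr), 4))  # 0.0
```

A well-calibrated LLM would have a low ECE; many of the methods the survey covers aim to reduce this gap, e.g. via temperature scaling or verbalized confidence.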

Authors (6)
  1. Jiahui Geng (24 papers)
  2. Fengyu Cai (12 papers)
  3. Yuxia Wang (41 papers)
  4. Heinz Koeppl (105 papers)
  5. Preslav Nakov (253 papers)
  6. Iryna Gurevych (264 papers)
Citations (22)

