Benchmarking Uncertainty Quantification Methods for Large Language Models with LM-Polygraph (2406.15627v2)
Abstract: Uncertainty quantification (UQ) is a critical component of machine learning (ML) applications. The rapid proliferation of large language models (LLMs) has spurred the search for efficient and effective UQ approaches for text generation. As with other ML models, LLMs are prone to incorrect predictions: "hallucinations", in which claims are fabricated, or simply low-quality outputs for a given input. UQ is a key element in dealing with these challenges. However, research to date on UQ methods for LLMs has been fragmented, both in the techniques proposed and in how they are evaluated. In this work, we tackle this issue by introducing a novel benchmark that implements a collection of state-of-the-art UQ baselines and provides an environment for controllable and consistent evaluation of novel UQ techniques over various text generation tasks. Our benchmark also supports the assessment of confidence normalization methods in terms of their ability to provide interpretable scores. Using the benchmark, we conduct a large-scale empirical investigation of UQ and normalization techniques across nine tasks and identify the most promising approaches. Code: https://github.com/IINemo/lm-polygraph
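Among the white-box baselines that benchmarks of this kind cover are information-based scores such as mean token entropy: averaging the entropy of the model's next-token distribution over the generated sequence, where higher values indicate greater uncertainty. The sketch below illustrates that idea with plain Hugging Face transformers rather than the LM-Polygraph API itself; the model name ("gpt2"), prompt, and generation settings are illustrative placeholders, not choices from the paper.

```python
# Minimal sketch of mean token entropy as a white-box UQ score.
# Assumes a causal LM from Hugging Face; not the LM-Polygraph API.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder; any causal LM works
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

prompt = "The capital of France is"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    out = model.generate(
        **inputs,
        max_new_tokens=20,
        do_sample=False,
        return_dict_in_generate=True,
        output_scores=True,  # keep per-step logits for the entropy computation
        pad_token_id=tokenizer.eos_token_id,
    )

# out.scores is a tuple with one (batch, vocab) logit tensor per generated token.
entropies = []
for step_logits in out.scores:
    log_probs = torch.log_softmax(step_logits, dim=-1)
    probs = log_probs.exp()
    # Shannon entropy of the next-token distribution at this step
    entropies.append(-(probs * log_probs).sum(dim=-1))

# Higher mean entropy: the model was, on average, less certain at each step.
mean_entropy = torch.stack(entropies).mean().item()
print(f"Mean token entropy (uncertainty score): {mean_entropy:.3f}")
```

In a benchmark such as LM-Polygraph, scores of this kind are wrapped as interchangeable estimators so that they can be evaluated uniformly across tasks and models.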
- Roman Vashurin
- Ekaterina Fadeeva
- Artem Vazhentsev
- Akim Tsvigun
- Daniil Vasilev
- Rui Xing
- Abdelrahman Boda Sadallah
- Lyudmila Rvanova
- Sergey Petrakov
- Alexander Panchenko
- Timothy Baldwin
- Preslav Nakov
- Maxim Panov
- Artem Shelmanov
- Kirill Grishchenkov