Full-ECE: A Metric For Token-level Calibration on Large Language Models (2406.11345v1)
Abstract: Deep Neural Networks (DNNs) excel in various domains but face challenges in providing accurate uncertainty estimates, which are crucial for high-stakes applications. Large Language Models (LLMs) have recently emerged as powerful tools, demonstrating exceptional performance in language tasks. However, traditional calibration metrics such as Expected Calibration Error (ECE) and classwise-ECE (cw-ECE) are inadequate for LLMs due to their vast vocabularies, data complexity, and distributional focus. To address this, we propose a novel calibration concept called full calibration and introduce its corresponding metric, Full-ECE. Full-ECE evaluates the entire predicted probability distribution, offering a more accurate and robust measure of calibration for LLMs.
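The abstract's core idea, binning every entry of the predicted distribution rather than only the top-1 (ECE) or per-class (cw-ECE) probabilities, can be sketched as follows. This is a minimal illustration under assumed conventions (equal-width bins, weighted absolute gaps), not the paper's exact formulation; the function name `full_ece` and its binning details are assumptions.

```python
import numpy as np

def full_ece(probs, labels, n_bins=10):
    """Sketch of a full-calibration error: every class probability of
    every prediction is paired with a 0/1 indicator of whether that
    class was the true label, then binned by confidence.

    probs: (N, V) array of predicted distributions over a vocabulary.
    labels: (N,) array of true class/token indices.
    """
    probs = np.asarray(probs, dtype=float)
    labels = np.asarray(labels)
    n, v = probs.shape

    # Indicator matrix: 1.0 where the class equals the true label.
    hits = np.zeros_like(probs)
    hits[np.arange(n), labels] = 1.0

    # Flatten: one (probability, indicator) pair per distribution entry.
    p = probs.ravel()
    h = hits.ravel()

    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        # Half-open bins (lo, hi], with the first bin closed at 0.
        mask = (p >= lo if lo == 0.0 else p > lo) & (p <= hi)
        if mask.any():
            # Bin weight times |empirical frequency - mean confidence|.
            ece += mask.mean() * abs(h[mask].mean() - p[mask].mean())
    return ece
```

A perfectly calibrated distribution such as `[[0.5, 0.5]]` yields an error of zero under this scheme, while an overconfident wrong prediction like `[[0.9, 0.1]]` with true label 1 is heavily penalized, since the error accumulates over all vocabulary entries, not just the predicted token.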
- Han Liu (340 papers)
- Yupeng Zhang (25 papers)
- Bingning Wang (29 papers)
- Weipeng Chen (56 papers)
- Xiaolin Hu (97 papers)