Restoring Calibration for Aligned Large Language Models: A Calibration-Aware Fine-Tuning Approach (2505.01997v2)

Published 4 May 2025 in cs.LG, cs.AI, and stat.ML

Abstract: One of the key technologies behind the success of LLMs is preference alignment. However, a notable side effect of preference alignment is poor calibration: while pre-trained models are typically well-calibrated, LLMs tend to become poorly calibrated after alignment with human preferences. In this paper, we investigate why preference alignment affects calibration and how to address this issue. For the first question, we observe that the preference collapse issue in alignment undesirably generalizes to the calibration scenario, causing LLMs to exhibit overconfidence and poor calibration. To address this, we demonstrate the importance of fine-tuning with domain-specific knowledge to alleviate the overconfidence issue. To further analyze whether this affects the model's performance, we categorize models into two regimes: calibratable and non-calibratable, defined by bounds of Expected Calibration Error (ECE). In the calibratable regime, we propose a calibration-aware fine-tuning approach to achieve proper calibration without compromising LLMs' performance. However, as models are further fine-tuned for better performance, they enter the non-calibratable regime. For this case, we develop an EM-algorithm-based ECE regularization for the fine-tuning loss to maintain low calibration error. Extensive experiments validate the effectiveness of the proposed methods.

Authors (7)
  1. Jiancong Xiao (15 papers)
  2. Bojian Hou (18 papers)
  3. Zhanliang Wang (3 papers)
  4. Ruochen Jin (4 papers)
  5. Qi Long (47 papers)
  6. Weijie J. Su (70 papers)
  7. Li Shen (363 papers)