
Calibrated Language Model Fine-Tuning for In- and Out-of-Distribution Data (2010.11506v1)

Published 22 Oct 2020 in cs.CL, cs.AI, and cs.LG

Abstract: Fine-tuned pre-trained language models can suffer from severe miscalibration for both in-distribution and out-of-distribution (OOD) data due to over-parameterization. To mitigate this issue, we propose a regularized fine-tuning method. Our method introduces two types of regularization for better calibration: (1) On-manifold regularization, which generates pseudo on-manifold samples through interpolation within the data manifold. Augmented training with these pseudo samples imposes a smoothness regularization to improve in-distribution calibration. (2) Off-manifold regularization, which encourages the model to output uniform distributions for pseudo off-manifold samples to address the over-confidence issue for OOD data. Our experiments demonstrate that the proposed method outperforms existing calibration methods for text classification in terms of expected calibration error, misclassification detection, and OOD detection on six datasets. Our code can be found at https://github.com/Lingkai-Kong/Calibrated-BERT-Fine-Tuning.
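The abstract only sketches the two regularizers, so the snippet below gives one plausible PyTorch rendering of the corresponding loss terms. The mixup-style interpolation of in-distribution embeddings, the Gaussian perturbation used to step off the manifold, and the names `model_head`, `emb_a`, `emb_b`, and `epsilon` are illustrative assumptions rather than the authors' exact construction; the linked repository is the authoritative implementation.

```python
import torch
import torch.nn.functional as F

def on_manifold_loss(model_head, emb_a, emb_b, y_a, y_b, alpha=0.4):
    # Sketch: interpolate two in-distribution embeddings (mixup-style)
    # to form a pseudo on-manifold sample, then fit the classifier head
    # to the correspondingly mixed label, which acts as a smoothness
    # regularizer for in-distribution calibration.
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    emb_mix = lam * emb_a + (1.0 - lam) * emb_b
    logits = model_head(emb_mix)
    return lam * F.cross_entropy(logits, y_a) + (1.0 - lam) * F.cross_entropy(logits, y_b)

def off_manifold_loss(model_head, emb, epsilon=1e-2):
    # Sketch: perturb an embedding off the data manifold (here with
    # simple Gaussian noise, an assumption) and penalize deviation of
    # the predictive distribution from uniform, discouraging
    # over-confident predictions on OOD-like inputs.
    noise = epsilon * torch.randn_like(emb)
    logits = model_head(emb + noise)
    log_probs = F.log_softmax(logits, dim=-1)
    num_classes = logits.size(-1)
    uniform = torch.full_like(log_probs, 1.0 / num_classes)
    return F.kl_div(log_probs, uniform, reduction="batchmean")
```

In this reading, the total fine-tuning objective would combine the standard cross-entropy loss with weighted sums of these two terms; the weights and the exact way off-manifold samples are generated are choices the paper itself specifies.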

Authors (6)
  1. Lingkai Kong (34 papers)
  2. Haoming Jiang (52 papers)
  3. Yuchen Zhuang (37 papers)
  4. Jie Lyu (5 papers)
  5. Tuo Zhao (131 papers)
  6. Chao Zhang (907 papers)
Citations (24)
