
Preserving Pre-trained Features Helps Calibrate Fine-tuned Language Models (2305.19249v1)

Published 30 May 2023 in cs.CL and cs.LG

Abstract: Large pre-trained language models (PLMs) have demonstrated strong performance on natural language understanding (NLU) tasks through fine-tuning. However, fine-tuned models still suffer from overconfident predictions, especially in out-of-domain settings. In this paper, we tackle the problem of calibrating fine-tuned language models. We demonstrate that the PLMs are well-calibrated on the masked language modeling task with robust predictive confidence under domain shift, yet the fine-tuned models fail to retain such property due to catastrophic forgetting, which impacts the calibration on the downstream classification task. In light of these observations, we evaluate the calibration of several methods that preserve pre-trained features and show that preserving pre-trained features can improve the calibration of fine-tuned language models. Among these methods, our proposed method that encourages the fine-tuned model to learn generative representations with auxiliary language modeling objective achieves competitive accuracy and the lowest expected calibration error compared to several strong baselines under both in-domain and out-of-domain settings on three downstream NLU tasks.
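The abstract points to two concrete ingredients: the expected calibration error (ECE) used for evaluation, and a fine-tuning objective that adds an auxiliary masked language modeling loss to the classification loss so pre-trained features are preserved. The sketch below illustrates both under stated assumptions; it is not the authors' released code, and the `encoder`, `cls_head`, `mlm_head`, and `lambda_lm` names are illustrative placeholders.

```python
# Minimal sketch: (1) expected calibration error, (2) joint classification +
# auxiliary masked-LM loss on a shared encoder. Names and the encoder API are
# assumptions for illustration, not the paper's exact implementation.
import torch
import torch.nn.functional as F


def expected_calibration_error(probs: torch.Tensor,
                               labels: torch.Tensor,
                               n_bins: int = 10) -> torch.Tensor:
    """Standard ECE: bin predictions by confidence, average |accuracy - confidence|."""
    conf, pred = probs.max(dim=-1)
    correct = pred.eq(labels).float()
    bins = torch.linspace(0.0, 1.0, n_bins + 1)
    ece = torch.zeros(())
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (conf > lo) & (conf <= hi)
        if mask.any():
            ece += mask.float().mean() * (correct[mask].mean() - conf[mask].mean()).abs()
    return ece


def joint_loss(encoder, cls_head, mlm_head,
               input_ids, attention_mask, labels,
               masked_input_ids, mlm_labels,
               lambda_lm: float = 1.0) -> torch.Tensor:
    """Classification loss plus an auxiliary masked-LM loss on the same encoder.

    `mlm_labels` follows the usual masked-LM convention: -100 at unmasked
    positions so cross_entropy ignores them.
    """
    # Downstream classification on the unmasked input.
    hidden = encoder(input_ids, attention_mask)       # (B, T, H), assumed signature
    logits = cls_head(hidden[:, 0])                   # [CLS]-style pooled representation
    loss_cls = F.cross_entropy(logits, labels)

    # Auxiliary masked language modeling on a masked copy of the batch.
    hidden_mlm = encoder(masked_input_ids, attention_mask)
    lm_logits = mlm_head(hidden_mlm)                  # (B, T, vocab_size)
    loss_lm = F.cross_entropy(lm_logits.view(-1, lm_logits.size(-1)),
                              mlm_labels.view(-1),
                              ignore_index=-100)

    return loss_cls + lambda_lm * loss_lm
```

In this reading, the auxiliary term keeps the encoder's generative (masked-LM) behavior from being forgotten during fine-tuning, which the abstract links to better-calibrated downstream predictions.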

Authors (3)
  1. Guande He (13 papers)
  2. Jianfei Chen (63 papers)
  3. Jun Zhu (424 papers)
Citations (14)
