
A Study on the Calibration of In-context Learning (2312.04021v4)

Published 7 Dec 2023 in cs.CL, cs.AI, and cs.LG

Abstract: Accurate uncertainty quantification is crucial for the safe deployment of machine learning models, and prior research has demonstrated improvements in the calibration of modern language models (LMs). We study in-context learning (ICL), a prevalent method for adapting static LMs through tailored prompts, and examine the balance between performance and calibration across a broad spectrum of natural language understanding and reasoning tasks. Through comprehensive experiments, we observe that, as the number of ICL examples grows, models initially exhibit increased miscalibration before achieving better calibration, and that miscalibration tends to arise in low-shot settings. Moreover, we find that methods aimed at improving usability, such as fine-tuning and chain-of-thought (CoT) prompting, can lead to miscalibration and unreliable natural language explanations. Finally, we explore recalibration techniques and find that a scaling-binning calibrator can reduce calibration errors consistently.
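
The abstract's two technical ingredients are a calibration metric (expected calibration error, ECE) and a scaling-binning recalibrator in the style of Kumar et al. (2019). The sketch below is a minimal illustration of both, not the paper's implementation; the function names and the choice of Platt scaling as the parametric scaler are assumptions made for the example.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def expected_calibration_error(conf, correct, n_bins=10):
    """ECE: per-bin |accuracy - mean confidence|, weighted by bin mass.
    Equal-width bins over [0, 1] (a common, but not the only, choice)."""
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (conf > lo) & (conf <= hi)
        if mask.any():
            ece += mask.mean() * abs(correct[mask].mean() - conf[mask].mean())
    return ece

def fit_scaling_binning(conf_cal, correct_cal, n_bins=10):
    """Scaling-binning sketch (after Kumar et al., 2019):
    (1) fit a parametric scaler on calibration data (Platt scaling here),
    (2) form equal-mass bins over the scaled scores,
    (3) output each bin's mean scaled score as the calibrated confidence."""
    eps = 1e-6
    def to_logit(p):
        p = np.clip(p, eps, 1 - eps)
        return np.log(p / (1 - p)).reshape(-1, 1)

    # (1) Platt scaling: logistic regression on the log-odds.
    scaler = LogisticRegression().fit(to_logit(conf_cal), correct_cal)
    scaled = scaler.predict_proba(to_logit(conf_cal))[:, 1]

    # (2) Equal-mass bin edges from the scaled calibration scores.
    edges = np.quantile(scaled, np.linspace(0.0, 1.0, n_bins + 1))

    # (3) Each bin predicts the mean scaled score of its members.
    idx = np.clip(np.digitize(scaled, edges[1:-1]), 0, n_bins - 1)
    means = np.array([scaled[idx == b].mean() if (idx == b).any() else 0.5
                      for b in range(n_bins)])

    def calibrate(conf_new):
        s = scaler.predict_proba(to_logit(conf_new))[:, 1]
        j = np.clip(np.digitize(s, edges[1:-1]), 0, n_bins - 1)
        return means[j]

    return calibrate
```

On a held-out split, the ECE of `calibrate(conf_test)` should typically be lower than that of the raw confidences; the paper's finding is that this style of recalibrator reduces calibration error consistently across its ICL settings.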

Authors (8)
  1. Hanlin Zhang (30 papers)
  2. Yi-Fan Zhang (32 papers)
  3. Yaodong Yu (39 papers)
  4. Dhruv Madeka (16 papers)
  5. Dean Foster (28 papers)
  6. Eric Xing (127 papers)
  7. Sham Kakade (84 papers)
  8. Himabindu Lakkaraju (88 papers)
Citations (10)