
Calibration and Correctness of Language Models for Code (2402.02047v4)

Published 3 Feb 2024 in cs.SE and cs.LG

Abstract: Machine learning models are widely used but can often be wrong. Users would benefit from a reliable indication of whether a given output from a given model should be trusted, so that a rational decision can be made about whether to use the output. For example, outputs can be associated with a confidence measure; if this confidence measure is strongly associated with likelihood of correctness, then the model is said to be well-calibrated. A well-calibrated confidence measure can serve as a basis for rational, graduated decision-making on how much review and care is needed when using generated code. Calibration has so far been studied mostly in non-generative (e.g. classification) settings, especially in software engineering. However, generated code can quite often be wrong: given generated code, developers must decide whether to use it directly, use it after review of varying intensity, or discard it. Thus, calibration is vital in generative settings. We make several contributions. We develop a framework for evaluating the calibration of code-generating models. We consider several tasks, correctness criteria, datasets, and approaches, and find that, by and large, the generative code models we test are not well-calibrated out of the box. We then show how calibration can be improved using standard methods, such as Platt scaling. Since Platt scaling relies on the prior availability of correctness data, we evaluate the applicability and generalizability of Platt scaling in software engineering, discuss settings where it has good potential for practical use, and settings where it does not. Our contributions will lead to better-calibrated decision-making in the current use of code generated by LLMs, and offer a framework for future research to further improve calibration methods for generative models in software engineering.
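The abstract's core recipe, measuring how well a raw confidence signal tracks correctness and then rescaling it with Platt scaling, can be illustrated with a short sketch. This is a minimal, hypothetical example rather than the paper's implementation: the raw confidences (e.g. average token probability of the generated code) and the binary correctness labels (e.g. from test execution) are made up, and in practice the Platt scaler would be fit on held-out correctness data and applied to new outputs, not evaluated on the data it was fit on.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def expected_calibration_error(confidences, correct, n_bins=10):
    """Standard ECE: bin-weighted mean gap between average confidence and accuracy."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            gap = abs(correct[mask].mean() - confidences[mask].mean())
            ece += mask.mean() * gap
    return ece

# Hypothetical data: raw confidences for generated code samples and
# whether each sample actually passed its tests.
raw_conf = np.array([0.91, 0.85, 0.97, 0.60, 0.88, 0.72, 0.95, 0.55])
is_correct = np.array([1, 0, 1, 0, 1, 1, 1, 0])

# Platt scaling: a logistic regression that maps the raw confidence
# to a calibrated probability of correctness.
platt = LogisticRegression()
platt.fit(raw_conf.reshape(-1, 1), is_correct)
calibrated = platt.predict_proba(raw_conf.reshape(-1, 1))[:, 1]

print("ECE before Platt scaling:", expected_calibration_error(raw_conf, is_correct))
print("ECE after Platt scaling: ", expected_calibration_error(calibrated, is_correct))
```

A lower ECE after rescaling indicates that the adjusted confidences track the observed correctness rate more closely, which is the sense in which the paper reports calibration improvements from Platt scaling.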

Authors (9)
  1. Claudio Spiess (4 papers)
  2. David Gros (7 papers)
  3. Kunal Suresh Pai (2 papers)
  4. Michael Pradel (49 papers)
  5. Md Rafiqul Islam Rabin (25 papers)
  6. Susmit Jha (55 papers)
  7. Prem Devanbu (9 papers)
  8. Toufique Ahmed (26 papers)
  9. Amin Alipour (6 papers)
Citations (8)