Calibrating Long-form Generations from Large Language Models (2402.06544v2)

Published 9 Feb 2024 in cs.CL, cs.AI, and cs.LG

Abstract: To enhance the reliability of large language models (LLMs), calibration is essential -- the model's assessed confidence scores should align with the actual likelihood of its responses being correct. However, current confidence elicitation methods and calibration metrics typically rely on a binary true/false assessment of response correctness, which does not apply to long-form generation, where an answer can be partially correct. To address this gap, we introduce a unified calibration framework in which both the correctness of the LLM's responses and its associated confidence levels are treated as distributions across a range of scores. Within this framework, we develop three metrics to precisely evaluate LLM calibration and further propose two confidence elicitation methods based on self-consistency and self-evaluation. Our experiments on long-form QA and summarization tasks show that larger models do not necessarily guarantee better calibration, that calibration performance is metric-dependent, and that self-consistency methods excel on factoid datasets. We also find that calibration can be improved through techniques such as fine-tuning, integrating relevant source documents, scaling the temperature, and combining self-consistency with self-evaluation. Lastly, we showcase a practical application of our system: selecting and cascading open-source models and ChatGPT to optimize correctness given a limited API budget. This research not only challenges existing notions of LLM calibration but also offers practical methodologies for improving trustworthiness in long-form generation.
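The abstract does not spell out the three metrics or the exact elicitation procedures, so the sketch below only illustrates the general idea: confidence and correctness are kept as graded scores in [0, 1] rather than booleans, confidence is approximated by agreement across resampled answers (a stand-in for self-consistency), and Spearman rank correlation serves as one plausible distribution-aware calibration measure rather than the paper's actual metrics. The `similarity`, `cheap_model`, `expensive_model`, and `confidence_fn` names and the 0.7 threshold are hypothetical placeholders.

```python
# Minimal sketch (not the paper's implementation) of distribution-level
# calibration for long-form answers and a confidence-gated model cascade.
from statistics import mean

from scipy.stats import spearmanr


def self_consistency_confidence(answer: str, samples: list[str], similarity) -> float:
    """Confidence as average agreement between the answer and resampled answers.

    `similarity` is a hypothetical pairwise scorer in [0, 1], e.g. an NLI or
    embedding-based match, standing in for the paper's self-consistency method.
    """
    return mean(similarity(answer, s) for s in samples)


def correlation_calibration(confidences: list[float], correctness: list[float]) -> float:
    """Distribution-aware calibration proxy: rank correlation between elicited
    confidence and graded correctness (higher means better calibrated)."""
    rho, _ = spearmanr(confidences, correctness)
    return rho


def cascade(prompt: str, cheap_model, expensive_model, confidence_fn, threshold: float = 0.7) -> str:
    """Budget-aware cascade sketch: keep the cheap model's answer when its
    elicited confidence clears the threshold, otherwise escalate."""
    answer = cheap_model(prompt)
    if confidence_fn(answer) >= threshold:
        return answer
    return expensive_model(prompt)


if __name__ == "__main__":
    # Toy numbers: graded correctness (e.g., fraction of supported atomic facts)
    # paired with sampled-agreement confidences.
    conf = [0.9, 0.4, 0.7, 0.2]
    corr = [0.8, 0.5, 0.6, 0.1]
    print(f"Spearman calibration score: {correlation_calibration(conf, corr):.2f}")
```

The `cascade` helper mirrors the budget-constrained application mentioned in the abstract: answer with a cheaper model when its elicited confidence is high enough, and escalate to a stronger model otherwise.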

Authors (5)
  1. Yukun Huang (39 papers)
  2. Yixin Liu (108 papers)
  3. Raghuveer Thirukovalluru (7 papers)
  4. Arman Cohan (121 papers)
  5. Bhuwan Dhingra (66 papers)
Citations (3)