
Conformal Alignment: Knowing When to Trust Foundation Models with Guarantees (2405.10301v3)

Published 16 May 2024 in stat.ML, cs.AI, and cs.LG

Abstract: Before deploying outputs from foundation models in high-stakes tasks, it is imperative to ensure that they align with human values. For instance, in radiology report generation, reports generated by a vision-language model must align with human evaluations before their use in medical decision-making. This paper presents Conformal Alignment, a general framework for identifying units whose outputs meet a user-specified alignment criterion. It is guaranteed that on average, a prescribed fraction of selected units indeed meet the alignment criterion, regardless of the foundation model or the data distribution. Given any pre-trained model and new units with model-generated outputs, Conformal Alignment leverages a set of reference data with ground-truth alignment status to train an alignment predictor. It then selects new units whose predicted alignment scores surpass a data-dependent threshold, certifying their corresponding outputs as trustworthy. Through applications to question answering and radiology report generation, we demonstrate that our method is able to accurately identify units with trustworthy outputs via lightweight training over a moderate amount of reference data. En route, we investigate the informativeness of various features in alignment prediction and combine them with standard models to construct the alignment predictor.
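The abstract describes a two-stage procedure: fit an alignment predictor on reference data whose true alignment status is known, then certify new units whose predicted scores clear a data-dependent threshold chosen so that, on average, at most a prescribed fraction of certified units are misaligned. The snippet below is a minimal sketch of one plausible reading of that selection step, using conformal p-values computed against the unaligned reference units followed by a Benjamini-Hochberg step-up rule; the function name, the exact p-value construction, and the thresholding details are illustrative assumptions, not the paper's verbatim algorithm.

```python
import numpy as np

def conformal_alignment_select(cal_scores, cal_aligned, test_scores, alpha=0.1):
    """Hypothetical sketch of the conformal selection step.

    cal_scores  : predicted alignment scores on held-out reference units
    cal_aligned : ground-truth alignment status (True = aligned) of those units
    test_scores : predicted alignment scores for new, unlabeled units
    alpha       : target level for the expected fraction of selected
                  units that are in fact NOT aligned (an FDR target)
    """
    cal_scores = np.asarray(cal_scores, dtype=float)
    cal_aligned = np.asarray(cal_aligned, dtype=bool)
    test_scores = np.asarray(test_scores, dtype=float)

    # The unaligned reference units play the role of a null sample: a test
    # unit is certified only if its score is extreme relative to theirs.
    null_scores = cal_scores[~cal_aligned]
    n = len(null_scores)

    # Conformal p-value: (1 + #{null scores >= test score}) / (n + 1).
    pvals = np.array([(1 + np.sum(null_scores >= s)) / (n + 1)
                      for s in test_scores])

    # Benjamini-Hochberg step-up procedure over the m test units.
    m = len(pvals)
    order = np.argsort(pvals)
    passes = pvals[order] <= alpha * np.arange(1, m + 1) / m
    k = int(np.max(np.nonzero(passes)[0]) + 1) if passes.any() else 0

    selected = np.zeros(m, dtype=bool)
    selected[order[:k]] = True  # outputs certified as trustworthy
    return selected
```

Under exchangeability of the calibration and test units, conformal selection of this style controls the expected fraction of wrongly certified units at the chosen level, which matches the kind of distribution-free guarantee the abstract claims.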

