
Characterizing LLM Abstention Behavior in Science QA with Context Perturbations (2404.12452v2)

Published 18 Apr 2024 in cs.CL

Abstract: The correct model response in the face of uncertainty is to abstain from answering a question so as not to mislead the user. In this work, we study the ability of LLMs to abstain from answering context-dependent science questions when provided insufficient or incorrect context. We probe model sensitivity in several settings: removing gold context, replacing gold context with irrelevant context, and providing additional context beyond what is given. In experiments on four QA datasets with six LLMs, we show that performance varies greatly across models, across the type of context provided, and also by question type; in particular, many LLMs seem unable to abstain from answering boolean questions using standard QA prompts. Our analysis also highlights the unexpected impact of abstention performance on QA task accuracy. Counter-intuitively, in some settings, replacing gold context with irrelevant context or adding irrelevant context to gold context can improve abstention performance in a way that results in improvements in task performance. Our results imply that changes are needed in QA dataset design and evaluation to more effectively assess the correctness and downstream impacts of model abstention.

Authors (3)
  1. Bingbing Wen (11 papers)
  2. Bill Howe (39 papers)
  3. Lucy Lu Wang (41 papers)
Citations (5)

Summary

  • The paper shows that context perturbations significantly influence LLM abstention behavior, affecting science QA reliability.
  • It employs context removal, replacement, and augmentation across four datasets to assess performance differences among LLMs.
  • Results indicate that instruction-tuned models better manage noisy contexts, underscoring the importance of refined prompt and dataset design.

Characterizing LLM Abstention Behavior in Science QA with Context Perturbations

The research presented in the paper offers a critical examination of LLMs and their ability to abstain from answering science-based questions in contexts where the given data is incomplete or erroneous. This paper contributes to the understanding of context sensitivity in LLMs, focusing particularly on scenarios where abstention is the most appropriate response to avoid misleading outcomes.

The authors explore how different context perturbation methods affect LLM performance on scientific question-answering (QA) tasks. They remove, replace, and augment the provided context, evaluating the results across four datasets (SQuAD2, PubMedQA, BioASQ, and QASPER) with six LLMs, including Llama 2, Vicuna, Flan-T5, and GPT-3.5. This approach reveals significant variability in abstention behavior across models, question types, and context perturbations.
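To make the perturbation settings concrete, the sketch below shows how the conditions might be constructed for a single QA instance. It is a minimal illustration assuming a plain prompt template; the prompt wording, function names, and the idea of drawing distractors from a pool of unrelated passages are illustrative assumptions, not the authors' code.

```python
import random

# Hypothetical prompt template; the paper's actual prompts may differ.
PROMPT_TEMPLATE = (
    "Answer the question using the context. If the context does not contain "
    "the answer, reply 'unanswerable'.\n\n"
    "Context: {context}\n\nQuestion: {question}\nAnswer:"
)

def build_perturbed_prompts(question, gold_context, distractor_pool, seed=0):
    """Construct prompts for the perturbation settings studied in the paper:
    removing the gold context, replacing it with irrelevant context, and
    adding irrelevant context on top of the gold context."""
    rng = random.Random(seed)
    irrelevant = rng.choice(distractor_pool)  # a passage from an unrelated question
    return {
        "gold": PROMPT_TEMPLATE.format(context=gold_context, question=question),
        "no_context": PROMPT_TEMPLATE.format(context="", question=question),
        "irrelevant_only": PROMPT_TEMPLATE.format(context=irrelevant, question=question),
        "gold_plus_irrelevant": PROMPT_TEMPLATE.format(
            context=gold_context + "\n" + irrelevant, question=question
        ),
    }
```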

Key empirical findings indicate that task accuracy and abstention performance are not independent. For instance, adding irrelevant context may improve abstention performance, which can paradoxically enhance measured task performance. This counterintuitive observation highlights the complexity of the interaction between abstention mechanisms and QA accuracy, suggesting that noisy contexts may serve as unintended cues that help models filter out questions they should not answer.
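One way to see why abstention and accuracy are coupled is through the scoring itself: once the gold context is removed or replaced, abstaining becomes the correct response, so a model that abstains more readily under noisy context earns credit on those instances. The toy scorer below illustrates this coupling; it is a simplified sketch under that assumption, not the paper's evaluation code.

```python
def score_response(prediction: str, gold_answer: str, context_is_sufficient: bool) -> float:
    """Toy scoring rule: abstention counts as correct exactly when the
    provided context is insufficient to answer the question."""
    abstained = prediction.strip().lower() == "unanswerable"
    if not context_is_sufficient:
        return 1.0 if abstained else 0.0   # abstaining is the only correct move
    if abstained:
        return 0.0                         # unnecessary abstention hurts accuracy
    return float(prediction.strip().lower() == gold_answer.strip().lower())

# Under this rule, a model that abstains more often when irrelevant context is
# supplied gains accuracy on the perturbed instances, even if its behavior on
# answerable instances is unchanged; this is the counterintuitive coupling
# described above.
```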

The analysis also highlights LLMs' inconsistent responses to boolean questions, which predispose models toward offering definitive yes/no answers even where abstention would be more appropriate. Moreover, instruction-tuned LLMs navigate context perturbations more successfully, indicating that fine-tuning and instruction-following capabilities remain pivotal to effective abstention behavior.
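The boolean-question failure mode is, in part, a prompting issue: a standard yes/no prompt never presents abstention as an option. The contrast below is a hypothetical illustration of the two prompt styles; the exact wording is an assumption, not the prompt used in the paper.

```python
# Standard boolean QA prompt: the answer space implicitly excludes abstention.
BOOLEAN_PROMPT = (
    "Context: {context}\n"
    "Question: {question}\n"
    "Answer yes or no:"
)

# Abstention-aware variant: the instruction explicitly licenses 'unanswerable',
# making it easier for instruction-tuned models to decline.
BOOLEAN_PROMPT_WITH_ABSTENTION = (
    "Context: {context}\n"
    "Question: {question}\n"
    "Answer yes, no, or 'unanswerable' if the context does not contain the answer:"
)
```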

From a theoretical standpoint, the paper exposes limitations in current QA datasets, specifically in their ability to accurately capture abstention behavior. The authors recommend adjustments to QA dataset design and evaluation methodologies to enable a more refined assessment of model abstention.

Looking ahead, the implications of this work are multifaceted. Practically, the results point to a need to strengthen LLM robustness against context perturbations, possibly through improved prompt engineering strategies or specialized architectural modifications. Theoretically, the paper suggests that abstention capability is an emergent property heavily influenced by a model's underlying training dynamics and preconditioning, a factor that warrants further exploration.

Given the current trajectory of LLM advancement, future work might benefit from examining additional influences on abstention performance such as domain-specific pretraining, in-context learning, and other context manipulation strategies. The paper serves as a valuable foundation in investigating the fine-grained behaviors of LLMs in relation to context-bound uncertainty, a topic of increasing relevance in both applied and educational AI domains.