
Teaching Large Language Models to Express Knowledge Boundary from Their Own Signals (2406.10881v1)

Published 16 Jun 2024 in cs.CL

Abstract: LLMs have achieved great success, but their occasional content fabrication, or hallucination, limits their practical application. Hallucination arises because LLMs struggle to admit ignorance due to inadequate training on knowledge boundaries. A key limitation of LLMs is that they cannot accurately express their knowledge boundary: answering questions they know while admitting ignorance to questions they do not know. In this paper, we aim to teach LLMs to recognize and express their knowledge boundary, so they can reduce hallucinations caused by fabricating answers when they do not know. We propose CoKE, which first probes LLMs' knowledge boundary via internal confidence on a set of questions, and then leverages the probing results to elicit expression of the knowledge boundary. Extensive experiments show CoKE helps LLMs express their knowledge boundary, answering known questions while declining unknown ones, significantly improving both in-domain and out-of-domain performance.
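The abstract describes probing a model's knowledge boundary via its internal confidence. The paper's exact probing procedure is not given here, but the idea can be sketched as thresholding a length-normalized sequence probability over the model's generated answer (the confidence measure, threshold, and function names below are illustrative assumptions, not the paper's implementation):

```python
import math

def sequence_confidence(token_logprobs):
    # Length-normalized answer probability: exp(mean token log-prob),
    # i.e. the geometric mean of per-token probabilities. Higher means
    # the model is more certain of its own output. (Assumed measure.)
    if not token_logprobs:
        return 0.0
    return math.exp(sum(token_logprobs) / len(token_logprobs))

def label_question(token_logprobs, threshold=0.5):
    # Label a question "known" when internal confidence clears the
    # (hypothetical) threshold, else "unknown"; such labels could then
    # supervise the model to answer or decline accordingly.
    if sequence_confidence(token_logprobs) >= threshold:
        return "known"
    return "unknown"

# A confident answer (high per-token probabilities) vs. a hesitant one.
confident = [math.log(0.9), math.log(0.95), math.log(0.85)]
hesitant = [math.log(0.3), math.log(0.2), math.log(0.4)]
print(label_question(confident))  # known
print(label_question(hesitant))   # unknown
```

In practice the log-probs would come from the LLM's own output distribution; the sketch only shows how a scalar confidence signal can partition questions into known and unknown sets.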

Authors (10)
  1. Lida Chen (8 papers)
  2. Zujie Liang (13 papers)
  3. Xintao Wang (132 papers)
  4. Jiaqing Liang (62 papers)
  5. Yanghua Xiao (151 papers)
  6. Feng Wei (39 papers)
  7. Jinglei Chen (10 papers)
  8. Zhenghong Hao (2 papers)
  9. Bing Han (74 papers)
  10. Wei Wang (1793 papers)
Citations (5)