Knowledge Boundary of Large Language Models: A Survey (2412.12472v2)

Published 17 Dec 2024 in cs.CL

Abstract: Although LLMs store vast amounts of knowledge in their parameters, they still have limitations in the memorization and utilization of certain knowledge, leading to undesired behaviors such as generating untruthful and inaccurate responses. This highlights the critical need to understand the knowledge boundary of LLMs, a concept that remains inadequately defined in existing research. In this survey, we propose a comprehensive definition of the LLM knowledge boundary and introduce a formalized taxonomy categorizing knowledge into four distinct types. Using this foundation, we systematically review the field through three key lenses: the motivation for studying LLM knowledge boundaries, methods for identifying these boundaries, and strategies for mitigating the challenges they present. Finally, we discuss open challenges and potential research directions in this area. We aim for this survey to offer the community a comprehensive overview, facilitate access to key issues, and inspire further advancements in LLM knowledge research.

Summary

  • The paper comprehensively surveys the knowledge boundaries of Large Language Models, defining the concept and proposing a formal four-type taxonomy for categorizing knowledge.
  • It details methods for identifying these boundaries using techniques like uncertainty estimation, confidence calibration, and internal state probing.
  • The study reviews strategies to mitigate issues arising from knowledge boundaries, such as prompt optimization, external knowledge retrieval, and mechanisms for model refusal.

The paper addresses the limitations of LLMs in knowledge memorization and utilization, which leads to untruthful or inaccurate responses. It proposes a comprehensive definition of the LLM knowledge boundary and a formalized taxonomy that categorizes knowledge into four distinct types. The paper systematically reviews the motivation for studying LLM knowledge boundaries, methods for identifying these boundaries, and strategies for mitigating the challenges that they present.

The paper defines three types of knowledge boundaries:

  • Outward Knowledge Boundary: the observable knowledge boundary for a specific LLM, verifiable through a limited subset of expressions.
  • Parametric Knowledge Boundary: the abstract knowledge boundary, where knowledge is embedded within the LLM parameters and verifiable by at least one expression.
  • Universal Knowledge Boundary: the whole set of knowledge known to humans, verifiable by input-output pairs.

Based on these boundaries, the paper establishes a formal four-type knowledge taxonomy:

  • Prompt-Agnostic Known Knowledge (PAK): Knowledge the model verifies under every available expression, regardless of how the prompt is phrased, where $K_{PAK} = \{k \in \mathcal{K} \mid \forall (q_k^i, a_k^i) \in \hat{Q}_k,\ P_{\theta}(a_k^i \mid q_k^i) > \epsilon\}$.
    • $k$: a piece of knowledge
    • $\mathcal{K}$: the whole set of abstracted knowledge known to humans
    • $q_k^i$, $a_k^i$: an input expression of $k$ and its expected output
    • $Q_k$: the full set of input-output expressions of $k$
    • $\hat{Q}_k$: a limited, practically available subset of $Q_k$
    • $P_{\theta}$: the output probability of the model with parameters $\theta$
    • $\epsilon$: a correctness threshold
  • Prompt-Sensitive Known Knowledge (PSK): Knowledge residing within the LLM's parameters but sensitive to how it is prompted, where $K_{PSK} = \{k \in \mathcal{K} \mid (\exists (q_k^i, a_k^i) \in Q_k,\ P_{\theta}(a_k^i \mid q_k^i) > \epsilon) \wedge (\exists (q_k^i, a_k^i) \in \hat{Q}_k,\ P_{\theta}(a_k^i \mid q_k^i) < \epsilon)\}$.
  • Model-Specific Unknown Knowledge (MSU): Knowledge absent from the specific LLM's parameters but known to humans, where $K_{MSU} = \{k \in \mathcal{K} \mid \forall (q_k^i, a_k^i) \in Q_k,\ P_{\theta}(a_k^i \mid q_k^i) < \epsilon\}$.
  • Model-Agnostic Unknown Knowledge (MAU): Knowledge unknown to both the model and humans, where $K_{MAU} = \{k \in \mathcal{K} \mid Q_k = \varnothing\}$.
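
To make the taxonomy concrete, here is a minimal Python sketch (not code from the paper) of how a piece of knowledge could be assigned to one of the four types; `score_fn`, the expression sets, and the threshold `EPSILON` are illustrative assumptions, and in practice $Q_k$ can only be approximated by sampling.

```python
# Illustrative classifier for the four-type taxonomy, assuming access to a
# scoring function that returns P_theta(a | q) for the model under study.
# All names and the threshold are hypothetical; Q_k is treated as enumerable
# here for clarity, although in practice it can only be approximated.
from typing import Callable, List, Tuple

EPSILON = 0.5  # assumed correctness threshold epsilon

def classify_knowledge(
    all_expressions: List[Tuple[str, str]],      # Q_k: every (q, a) expression of k
    sampled_expressions: List[Tuple[str, str]],  # Q_hat_k: the subset we can actually test
    score_fn: Callable[[str, str], float],       # returns P_theta(a | q)
) -> str:
    """Assign a piece of knowledge k to PAK, PSK, MSU, or MAU."""
    if not all_expressions:
        return "MAU"  # Q_k is empty: no human-known expression exists

    sampled_scores = [score_fn(q, a) for q, a in sampled_expressions]
    all_scores = [score_fn(q, a) for q, a in all_expressions]

    if all(s > EPSILON for s in sampled_scores):
        return "PAK"  # every tested prompt elicits the knowledge
    if any(s > EPSILON for s in all_scores):
        return "PSK"  # some phrasing works, but at least one tested prompt fails
    return "MSU"      # no expression elicits the knowledge, though humans know it
```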

The authors discuss undesirable behaviors of LLMs that stem from unawareness of knowledge boundaries, such as factuality hallucinations, untruthful responses misled by context, and truthful but undesired responses. Factuality hallucinations arise from deficiencies in domain-specific knowledge, outdated information, and overconfidence in addressing unknowns. Moreover, LLMs often produce untruthful responses when misled by untruthful or irrelevant context. Ambiguous knowledge may lead to random responses, while controversial knowledge can result in biased outputs.

The survey categorizes approaches for identifying knowledge boundaries into uncertainty estimation (UE), confidence calibration, and internal state probing. UE quantifies the uncertainty of a model's predictions, decomposing it into epistemic and aleatoric components: epistemic uncertainty reflects the model's lack of knowledge, while aleatoric uncertainty reflects noise inherent in the data. Confidence calibration aligns the LLM's stated confidence with its actual correctness, using prompt-based and fine-tuning approaches. Internal state probing assesses factual accuracy by training linear probes on internal states, including attention heads, hidden-layer activations, neurons, and token representations.
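
As a hedged illustration of two of these signals (a sketch under stated assumptions, not the paper's implementation), the snippet below computes sampling-based predictive entropy, a common proxy for epistemic uncertainty, and expected calibration error (ECE), a standard measure of how well stated confidence tracks correctness; `generate_answer` is a hypothetical sampling helper.

```python
# Illustrative sketch of two identification signals discussed in the survey:
# sampling-based predictive entropy (uncertainty estimation) and expected
# calibration error (confidence calibration). `generate_answer` is a
# hypothetical helper that samples one answer string from the LLM.
import math
from collections import Counter
from typing import Callable, List, Tuple

def predictive_entropy(question: str,
                       generate_answer: Callable[[str], str],
                       n_samples: int = 10) -> float:
    """Higher entropy over sampled answers suggests the question sits
    near or beyond the model's outward knowledge boundary."""
    answers = [generate_answer(question) for _ in range(n_samples)]
    counts = Counter(answers)
    probs = [c / n_samples for c in counts.values()]
    return -sum(p * math.log(p) for p in probs)

def expected_calibration_error(preds: List[Tuple[float, bool]],
                               n_bins: int = 10) -> float:
    """preds: (stated confidence in [0, 1], answer was correct) pairs.
    ECE measures how far stated confidence drifts from actual accuracy."""
    bins = [[] for _ in range(n_bins)]
    for conf, correct in preds:
        idx = min(int(conf * n_bins), n_bins - 1)
        bins[idx].append((conf, correct))
    ece = 0.0
    for bucket in bins:
        if not bucket:
            continue
        avg_conf = sum(c for c, _ in bucket) / len(bucket)
        acc = sum(1 for _, ok in bucket if ok) / len(bucket)
        ece += len(bucket) / len(preds) * abs(avg_conf - acc)
    return ece
```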

Mitigation strategies are organized by knowledge type. For PSK, they include prompt optimization, prompt-based reasoning, self-refinement, and factuality decoding. For MSU, mitigation involves external knowledge retrieval, parametric knowledge editing, and knowledge-enhanced fine-tuning. For MAU, strategies include refusal and asking clarification questions.
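
A minimal sketch of how these mitigation strategies might compose at inference time is shown below, assuming hypothetical `answer_with_confidence` and `retrieve_documents` helpers and illustrative thresholds: answer from parametric knowledge when confidence is high, fall back to retrieval for likely MSU cases, and refuse for likely MAU cases.

```python
# Hypothetical routing policy combining the mitigation strategies above:
# answer directly when confident (PAK-like), fall back to retrieval when the
# knowledge seems missing (MSU-like), and refuse when even retrieval is
# uninformative (MAU-like). All helpers and thresholds are illustrative.
from typing import Callable, List, Tuple

ANSWER_THRESHOLD = 0.75    # assumed confidence needed to answer from parameters
RETRIEVAL_THRESHOLD = 0.4  # assumed confidence needed after adding retrieved context

def respond(question: str,
            answer_with_confidence: Callable[[str], Tuple[str, float]],
            retrieve_documents: Callable[[str], List[str]]) -> str:
    answer, conf = answer_with_confidence(question)
    if conf >= ANSWER_THRESHOLD:
        return answer                       # parametric knowledge suffices

    docs = retrieve_documents(question)
    augmented = "\n".join(docs) + "\n\nQuestion: " + question
    answer, conf = answer_with_confidence(augmented)
    if conf >= RETRIEVAL_THRESHOLD:
        return answer                       # external knowledge fills the gap

    # Neither parameters nor retrieval support a trustworthy answer:
    # refuse rather than hallucinate.
    return "I'm not confident I can answer this correctly."
```

A fuller policy along these lines would also ask a clarification question when the query itself is ambiguous, as the survey notes for MAU cases.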

The authors also highlight several open challenges and prospects: building more comprehensive benchmarks for assessing knowledge boundaries, generalizing boundary identification across domains, exploiting knowledge-boundary awareness in future model development, and mitigating unintended side effects such as over-refusal and unnecessary computational cost.
