What Matters in Memorizing and Recalling Facts? Multifaceted Benchmarks for Knowledge Probing in Language Models (2406.12277v3)

Published 18 Jun 2024 in cs.CL

Abstract: Large language models (LLMs) often struggle with factual knowledge, exhibiting factual hallucination. This makes it vital to evaluate a model's ability to recall its parametric knowledge about facts. In this study, we introduce a knowledge probing benchmark, BELIEF(ICL), to evaluate the knowledge recall ability of both encoder- and decoder-based pre-trained LLMs (PLMs) from diverse perspectives. BELIEFs utilize a multi-prompt dataset to evaluate a PLM's accuracy, consistency, and reliability in factual knowledge recall. To enable a more reliable evaluation with BELIEFs, we semi-automatically create MyriadLAMA, which has massively diverse prompts. We validate the effectiveness of BELIEFs in comprehensively evaluating knowledge recall ability on diverse PLMs, including recent LLMs. We then investigate key factors in memorizing and recalling facts in PLMs, such as model size, pre-training strategy and corpora, the instruction-tuning process, and in-context learning settings. Finally, we reveal the limitations of prompt-based knowledge probing. MyriadLAMA is publicly available.
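To make the idea of multi-prompt knowledge probing concrete, here is a minimal illustrative sketch, not the authors' BELIEF/MyriadLAMA harness. It assumes an encoder PLM served through the Hugging Face `fill-mask` pipeline and probes a single fact with several paraphrased prompts, reporting accuracy (top-1 prediction matches the gold object) and a simple pairwise consistency score (how often different prompts agree with each other).

```python
# Minimal sketch of multi-prompt factual knowledge probing (illustrative only;
# not the authors' BELIEF/MyriadLAMA implementation).
from itertools import combinations
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")

# One fact (subject, relation, object) probed with several paraphrased prompts.
gold_object = "paris"
prompts = [
    "The capital of France is [MASK].",
    "France's capital city is [MASK].",
    "[MASK] is the capital of France.",
]

# Top-1 prediction for each prompt.
predictions = [fill_mask(p, top_k=1)[0]["token_str"].strip().lower() for p in prompts]

# Accuracy: fraction of prompts whose top prediction matches the gold object.
accuracy = sum(pred == gold_object for pred in predictions) / len(predictions)

# Consistency: fraction of prompt pairs yielding the same top prediction,
# regardless of whether that prediction is correct.
pairs = list(combinations(predictions, 2))
consistency = sum(a == b for a, b in pairs) / len(pairs)

print(f"predictions: {predictions}")
print(f"accuracy:    {accuracy:.2f}")
print(f"consistency: {consistency:.2f}")
```

The actual benchmark additionally measures reliability and covers decoder-based LLMs in in-context learning settings; the sketch above only conveys how diverse prompts for the same fact enable accuracy and consistency measurements.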

Authors (3)
  1. Xin Zhao (160 papers)
  2. Naoki Yoshinaga (17 papers)
  3. Daisuke Oba (5 papers)