
Knowledgeable or Educated Guess? Revisiting Language Models as Knowledge Bases (2106.09231v1)

Published 17 Jun 2021 in cs.CL and cs.AI

Abstract: Previous literature shows that pre-trained masked language models (MLMs) such as BERT can achieve competitive factual knowledge extraction performance on some datasets, indicating that MLMs could potentially serve as a reliable knowledge source. In this paper, we conduct a rigorous study of the underlying prediction mechanisms of MLMs across different extraction paradigms. By investigating the behaviors of MLMs, we find that the previously reported strong performance is mainly due to biased prompts that overfit dataset artifacts. Furthermore, incorporating illustrative cases and external contexts improves knowledge prediction mainly through entity type guidance and golden answer leakage. Our findings shed light on the underlying prediction mechanisms of MLMs and strongly question the previous conclusion that current MLMs can potentially serve as reliable factual knowledge bases.
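
The extraction paradigm the abstract questions is cloze-style probing of an MLM (as in LAMA): a relational fact is phrased as a prompt with a masked slot, and the model's top prediction for that slot is scored as its answer. Below is a minimal sketch of that paradigm, assuming the Hugging Face transformers fill-mask pipeline and bert-base-cased; the prompts and model here are illustrative, not the paper's exact setup.

```python
# Minimal sketch of cloze-style factual probing with an MLM.
# Model and prompts are illustrative assumptions, not the paper's setup.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-cased")

# Each cloze prompt queries the MLM for a fact; the top-ranked token
# for the masked slot is treated as the model's "answer".
prompts = [
    "The capital of France is [MASK].",
    "Dante was born in [MASK].",
]

for prompt in prompts:
    top = fill_mask(prompt, top_k=1)[0]
    print(f"{prompt} -> {top['token_str']} (score={top['score']:.3f})")
```

The paper's point is that accuracy under such prompts can come from prompt bias and dataset artifacts (e.g., the prompt alone nudging the model toward frequent entities of the right type) rather than from genuinely stored factual knowledge.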

Authors (8)
  1. Boxi Cao (21 papers)
  2. Hongyu Lin (94 papers)
  3. Xianpei Han (103 papers)
  4. Le Sun (111 papers)
  5. Lingyong Yan (29 papers)
  6. Meng Liao (7 papers)
  7. Tong Xue (2 papers)
  8. Jin Xu (131 papers)
Citations (119)