
How Pre-trained Language Models Capture Factual Knowledge? A Causal-Inspired Analysis (2203.16747v1)

Published 31 Mar 2022 in cs.CL

Abstract: Recently, there has been a trend to investigate the factual knowledge captured by Pre-trained Language Models (PLMs). Many works show PLMs' ability to fill in missing factual words in cloze-style prompts such as "Dante was born in [MASK]." However, it remains unclear how PLMs arrive at correct answers: do they rely on effective clues or on shortcut patterns? We address this question with a causal-inspired analysis that quantitatively measures and evaluates the word-level patterns PLMs depend on when generating the missing words. We examine words that have three typical associations with the missing word: knowledge-dependent, positionally close, and highly co-occurring. Our analysis shows that (1) PLMs generate missing factual words by relying more on positionally close and highly co-occurring words than on knowledge-dependent words, and (2) the dependence on knowledge-dependent words is more effective than the dependence on positionally close and highly co-occurring words. Accordingly, we conclude that PLMs capture factual knowledge ineffectively because they depend on inadequate associations.
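
The dependence measurement described in the abstract can be illustrated with a simple ablation probe: score the [MASK] answer in a cloze prompt, remove a candidate cue word, and see how much the answer's probability drops. The sketch below is only a hedged illustration of that idea; the model choice (bert-base-uncased), the [UNK]-substitution ablation, and the probability-drop score are assumptions for demonstration, not the paper's actual causal intervention or code.

```python
# Minimal sketch of a word-ablation dependence probe for a masked LM.
# Assumption-laden illustration, not the paper's exact causal metric.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

MODEL_NAME = "bert-base-uncased"  # any masked LM should work
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForMaskedLM.from_pretrained(MODEL_NAME).eval()


def answer_prob(prompt: str, answer: str) -> float:
    """Probability the model assigns to `answer` at the [MASK] position."""
    inputs = tokenizer(prompt, return_tensors="pt")
    mask_pos = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero(as_tuple=True)[0][0]
    with torch.no_grad():
        logits = model(**inputs).logits
    probs = torch.softmax(logits[0, mask_pos], dim=-1)
    return probs[tokenizer.convert_tokens_to_ids(answer)].item()


def dependence_on(prompt: str, answer: str, cue: str) -> float:
    """Drop in answer probability when `cue` is ablated; the cue is replaced
    with [UNK] so the prompt keeps exactly one [MASK] position."""
    ablated = prompt.replace(cue, tokenizer.unk_token, 1)
    return answer_prob(prompt, answer) - answer_prob(ablated, answer)


if __name__ == "__main__":
    prompt = f"Dante was born in {tokenizer.mask_token}."
    # Compare dependence on the knowledge-dependent word ("Dante") with a
    # positionally close / frequently co-occurring word ("born").
    print("P(florence)           :", answer_prob(prompt, "florence"))
    print("dependence on 'Dante' :", dependence_on(prompt, "florence", "Dante"))
    print("dependence on 'born'  :", dependence_on(prompt, "florence", "born"))
```

A larger dependence score for "born" than for "Dante" would, under this toy metric, echo the paper's first finding: the prediction leans more on positionally close and co-occurring words than on the knowledge-dependent subject.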

Authors (9)
  1. Shaobo Li (24 papers)
  2. Xiaoguang Li (71 papers)
  3. Lifeng Shang (90 papers)
  4. Zhenhua Dong (76 papers)
  5. Chengjie Sun (9 papers)
  6. Bingquan Liu (9 papers)
  7. Zhenzhou Ji (6 papers)
  8. Xin Jiang (242 papers)
  9. Qun Liu (230 papers)
Citations (43)