How Pre-trained Language Models Capture Factual Knowledge? A Causal-Inspired Analysis (2203.16747v1)
Abstract: Recently, there has been a trend to investigate the factual knowledge captured by Pre-trained Language Models (PLMs). Many works show PLMs' ability to fill in missing factual words in cloze-style prompts such as "Dante was born in [MASK]." However, it remains a mystery how PLMs generate these results correctly: by relying on effective clues or on shortcut patterns? We address this question with a causal-inspired analysis that quantitatively measures and evaluates the word-level patterns PLMs depend on to generate the missing words. We examine words that have three typical associations with the missing words: knowledge-dependent, positionally close, and highly co-occurring. Our analysis shows: (1) PLMs generate the missing factual words more from positionally close and highly co-occurring words than from knowledge-dependent words; (2) dependence on knowledge-dependent words is more effective than dependence on positionally close and highly co-occurring words. Accordingly, we conclude that PLMs capture factual knowledge ineffectively because they depend on inadequate associations.
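To make the "highly co-occurring" shortcut concrete, here is a minimal, hypothetical sketch (not the authors' method): a toy baseline that fills the masked slot purely by sentence-level co-occurrence counts over a tiny made-up corpus. A model behaving like this can answer cloze prompts without any knowledge-dependent reasoning, which is the kind of shortcut the paper measures.

```python
from collections import Counter

# Toy corpus standing in for pre-training text (assumption: purely
# illustrative; the paper analyzes real PLMs, not this counting baseline).
corpus = [
    "dante was born in florence",
    "dante wrote the divine comedy in florence",
    "dante lived in ravenna",
]

def cooccurrence_fill(prompt_words, candidates, corpus):
    """Pick the candidate that co-occurs most often with the prompt words.

    This mimics a 'highly co-occurring' shortcut: no factual knowledge is
    consulted, only raw sentence-level co-occurrence counts.
    """
    counts = Counter()
    for sentence in corpus:
        tokens = sentence.split()
        for cand in candidates:
            if cand in tokens:
                # Credit the candidate once per prompt-word occurrence
                # in the same sentence.
                counts[cand] += sum(tokens.count(w) for w in prompt_words)
    return counts.most_common(1)[0][0]

# Words kept from the cloze prompt "Dante was born in [MASK]."
prompt = ["dante", "born"]
candidates = ["florence", "ravenna"]
print(cooccurrence_fill(prompt, candidates, corpus))  # → florence
```

Here the shortcut happens to yield the right answer only because "florence" co-occurs with "dante" more often in the toy corpus; the paper's point is that such dependence is less effective than relying on knowledge-dependent words.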
- Shaobo Li (24 papers)
- Xiaoguang Li (71 papers)
- Lifeng Shang (90 papers)
- Zhenhua Dong (76 papers)
- Chengjie Sun (9 papers)
- Bingquan Liu (9 papers)
- Zhenzhou Ji (6 papers)
- Xin Jiang (242 papers)
- Qun Liu (230 papers)