The Queen of England is not England's Queen: On the Lack of Factual Coherency in PLMs (2402.01453v1)

Published 2 Feb 2024 in cs.CL

Abstract: Factual knowledge encoded in Pre-trained Language Models (PLMs) enriches their representations and justifies their use as knowledge bases. Previous work has focused on probing PLMs for factual knowledge by measuring how often they can correctly predict an object entity given a subject and a relation, and on improving fact retrieval by optimizing the prompts used for querying PLMs. In this work, we consider a complementary aspect, namely the coherency of factual knowledge in PLMs, i.e., how often PLMs can predict the subject entity given their initial prediction of the object entity. This goes beyond evaluating how much PLMs know, and focuses on the internal state of knowledge inside them. Our results indicate that PLMs have low coherency using manually written, optimized, and paraphrased prompts, but that including an evidence paragraph leads to substantial improvement. This shows that PLMs fail to model inverse relations and need further enhancements to retrieve facts from their parameters coherently, and to be considered knowledge bases.
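
To make the coherency probe concrete, here is a minimal sketch of the forward and inverse queries using a masked language model via Hugging Face `transformers`. The model choice, prompt templates, and the example fact are illustrative assumptions, not the paper's actual prompts or evaluation data.

```python
# A minimal sketch of the coherency probe described in the abstract:
# query a masked LM for the object of a fact, then plug that prediction
# back into an inverse-relation prompt and check whether the original
# subject is recovered. All specifics here are illustrative assumptions.
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")

# Forward query: (subject, relation) -> object.
obj = fill("The capital of England is [MASK].", top_k=1)[0]["token_str"]

# Inverse query: (predicted object, inverse relation) -> subject.
subj = fill(f"{obj.capitalize()} is the capital of [MASK].", top_k=1)[0]["token_str"]

# The probe counts the fact as coherent only if the subject is recovered;
# the paper reports that this frequently fails without an evidence paragraph.
print(f"object: {obj!r} -> recovered subject: {subj!r}")
print("coherent" if subj.lower() == "england" else "incoherent")
```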

Authors (3)
  1. Paul Youssef (13 papers)
  2. Jörg Schlötterer (35 papers)
  3. Christin Seifert (46 papers)
Citations (1)