Language Models are Open Knowledge Graphs (2010.11967v1)

Published 22 Oct 2020 in cs.CL, cs.AI, and cs.LG

Abstract: This paper shows how to construct knowledge graphs (KGs) from pre-trained language models (e.g., BERT, GPT-2/3), without human supervision. Popular KGs (e.g., Wikidata, NELL) are built in either a supervised or semi-supervised manner, requiring humans to create knowledge. Recent deep language models automatically acquire knowledge from large-scale corpora via pre-training. The stored knowledge has enabled the language models to improve downstream NLP tasks, e.g., answering questions, and writing code and articles. In this paper, we propose an unsupervised method to cast the knowledge contained within language models into KGs. We show that KGs are constructed with a single forward pass of the pre-trained language models (without fine-tuning) over the corpora. We demonstrate the quality of the constructed KGs by comparing to two KGs (Wikidata, TAC KBP) created by humans. Our KGs also provide open factual knowledge that is new in the existing KGs. Our code and KGs will be made publicly available.

Overview of "Language Models are Open Knowledge Graphs"

The paper "LLMs are \Open Knowledge Graphs" articulates an innovative approach to constructing knowledge graphs (KGs) from pre-trained LLMs (LMs) such as BERT and GPT-2/3. The fundamental proposition of this paper is to extract knowledge from these LMs without relying on human supervision, which marks a significant shift from traditional supervised KG construction methods.

Methodology and Contributions

The methodology centers on an unsupervised technique termed "Match and Map" (MaMa), which involves two stages:

  1. Match Stage: Candidate facts are generated by matching text from a corpus against the knowledge stored in a pre-trained LM. The technique leverages the attention mechanisms inherent in LMs to identify relationships between entities, expressed as triplets (head, relation, tail). Beam search over the attention scores is used to capture the most probable relations, without any fine-tuning (a minimal sketch follows this list).
  2. Map Stage: These candidate facts are mapped onto structured knowledge graphs. If candidate facts align with existing KG schemas (e.g., Wikidata), they are incorporated directly. Otherwise, they are added to an open schema allowing for knowledge expansion beyond current schemas.
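
To make the Match stage concrete, here is a minimal, hedged sketch of attention-guided triple extraction with a pre-trained LM. It is not the authors' implementation: it uses a greedy search (beam width 1) instead of full beam search, averages attention over the heads of the last layer only, and assumes the head and tail entities are single tokens; the model name and example sentence are purely illustrative.

```python
import torch
from transformers import AutoModel, AutoTokenizer

# Illustrative model choice; the paper also evaluates larger BERT and GPT-2 variants.
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
model = AutoModel.from_pretrained("bert-base-cased")
model.eval()

def match_candidate_fact(sentence, head, tail):
    """Greedily extract a (head, relation, tail) candidate fact from one
    sentence, guided by the LM's self-attention (beam-width-1 simplification)."""
    enc = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        out = model(**enc, output_attentions=True)

    # Average the last layer's attention over heads: shape [seq_len, seq_len].
    attn = out.attentions[-1].mean(dim=1)[0]
    tokens = tokenizer.convert_ids_to_tokens(enc["input_ids"][0])

    # Naive single-token entity lookup (a real system needs span matching).
    h, t = tokens.index(head), tokens.index(tail)

    # Walk forward from the head toward the tail, at each step jumping to the
    # position the current token attends to most strongly; the visited tokens
    # form the candidate relation.
    relation, pos = [], h
    while pos < t - 1:
        nxt = max(range(pos + 1, t), key=lambda j: attn[pos, j].item())
        relation.append(tokens[nxt])
        pos = nxt
    return head, " ".join(relation), tail

print(match_candidate_fact("Rome is the capital of Italy .", "Rome", "Italy"))
```

Depending on the model's attention patterns, this yields a candidate such as (Rome, "the capital of", Italy); in the paper, candidate facts are additionally scored and filtered before being passed to the Map stage.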

A distinguishing feature of the proposed system is its ability to generate "open KGs." These incorporate both facts within existing KG structures and entirely new facts in open schemas, which are not covered by the existing knowledge bases.
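
The Map stage and the fixed-versus-open schema distinction can be illustrated with an equally small sketch. The lookup tables below stand in for the entity linking and relation mapping used in the paper; the Wikidata property and item IDs shown are real, but the tables themselves are toy examples.

```python
# Toy stand-ins for an entity linker and a relation-to-schema mapping.
WIKIDATA_RELATIONS = {
    "capital of": "P1376",  # Wikidata property "capital of"
    "born in": "P19",       # Wikidata property "place of birth"
}
ENTITY_LINKS = {"Rome": "Q220", "Italy": "Q38"}  # Wikidata item IDs

def map_candidate_fact(head, relation, tail):
    """Map a candidate (head, relation, tail) triple into the fixed Wikidata
    schema when possible; otherwise keep it in the open schema."""
    head_id = ENTITY_LINKS.get(head)
    tail_id = ENTITY_LINKS.get(tail)
    rel_id = WIKIDATA_RELATIONS.get(relation)

    if head_id and tail_id and rel_id:
        return {"schema": "fixed", "triple": (head_id, rel_id, tail_id)}
    # Unmapped facts are not discarded; they join the open schema.
    return {"schema": "open", "triple": (head, relation, tail)}

print(map_candidate_fact("Rome", "capital of", "Italy"))
```

Facts whose entities or relation cannot be mapped are retained in the open schema, which is how the resulting KG grows beyond the coverage of the reference KGs.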

Results and Evaluation

The paper highlights the potential of MaMa through evaluations against existing knowledge graphs, TAC KBP and Wikidata. The extracted facts show strong precision, exceeding 60% in several settings and outperforming open information extraction systems such as Stanford OpenIE and OpenIE 5.1. Recall is lower, leaving room for future improvement, especially with larger and more diverse language models.

Moreover, deeper and larger LMs such as GPT-2 XL perform better, suggesting that larger models embed richer and more complex knowledge. Notably, BERT-based models achieve higher recall than GPT-2 models of comparable size, pointing to the efficacy of BERT's masked language modeling objective.

Implications and Future Directions

The implications of extracting open KGs from LMs extend into various domains, primarily knowledge expansion in AI applications, KG construction, and deep neural network interpretability. As demonstrated, MaMa can uncover factual knowledge absent from existing KGs, broadening their useful scope.

Looking ahead, advancing this work involves improving recall and exploring the potential of even larger LMs, such as GPT-3 or Megatron-LM. There is also room for refining the alignment algorithms for better entity detection and relation mapping. Furthermore, integrating crowdsourcing evaluations or more sophisticated methods, such as graph neural networks, could enhance both the extraction precision and the understanding of the nuanced knowledge embedded in LMs.

In conclusion, this paper offers a compelling perspective on leveraging the knowledge stored in language models to construct and expand knowledge graphs, providing insight into how much factual knowledge LMs acquire during unsupervised pre-training. This line of work could prove essential in bridging the gap between deep learning models and structured knowledge representation systems.

Authors (3)
  1. Chenguang Wang (59 papers)
  2. Xiao Liu (402 papers)
  3. Dawn Song (229 papers)
Citations (127)