Barack's Wife Hillary: Using Knowledge-Graphs for Fact-Aware Language Modeling (1906.07241v2)

Published 17 Jun 2019 in cs.CL

Abstract: Modeling human language requires the ability to not only generate fluent text but also encode factual knowledge. However, traditional language models are only capable of remembering facts seen at training time, and often have difficulty recalling them. To address this, we introduce the knowledge graph language model (KGLM), a neural language model with mechanisms for selecting and copying facts from a knowledge graph that are relevant to the context. These mechanisms enable the model to render information it has never seen before, as well as generate out-of-vocabulary tokens. We also introduce the Linked WikiText-2 dataset, a corpus of annotated text aligned to the Wikidata knowledge graph whose contents (roughly) match the popular WikiText-2 benchmark. In experiments, we demonstrate that the KGLM achieves significantly better performance than a strong baseline language model. We additionally compare different language models' ability to complete sentences requiring factual knowledge, showing that the KGLM outperforms even very large language models in generating facts.

Using Knowledge Graphs for Fact-Aware Language Modeling

The paper "Barack's Wife Hillary: Using Knowledge-Graphs for Fact-Aware Language Modeling" presents a novel approach to language modeling that integrates a knowledge graph to improve the factual correctness of generated text. Traditional language models can only memorize facts observed during training and often struggle to recall them accurately. The authors propose the knowledge graph language model (KGLM) to address these issues, leveraging an external knowledge graph to select and copy relevant facts, thereby improving the generation of factually correct sentences.
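To make the select-and-copy mechanism concrete, the sketch below illustrates the kind of decision KGLM makes at each generation step: choose a mention type, optionally pick an already-mentioned parent entity and a relation from the growing local graph, and copy an alias whose tokens may be out of vocabulary. This is a minimal illustration, not the authors' implementation; the toy graph, aliases, and random choices stand in for the model's learned distributions.

```python
import random

# Toy knowledge graph: subject -> {relation: object}
KG = {
    "Barack Obama": {"spouse": "Michelle Obama", "birthplace": "Honolulu"},
    "Michelle Obama": {"spouse": "Barack Obama"},
}

# Surface forms the model may copy for each entity
ALIASES = {
    "Barack Obama": ["Barack", "Barack Obama"],
    "Michelle Obama": ["Michelle", "Michelle Obama"],
    "Honolulu": ["Honolulu"],
}

def generate_step(local_entities):
    """One KGLM-style step: pick a mention type, then a token source."""
    # In the real model these choices come from learned distributions
    # conditioned on the hidden state; here they are uniform at random.
    mention_type = random.choice(["none", "new", "related"])
    if mention_type == "none":
        # Ordinary word from the language model's vocabulary.
        return {"type": "none", "token": "<vocab word>"}
    if mention_type == "new" or not local_entities:
        # Introduce an entity not tied to anything mentioned so far.
        entity = random.choice(list(KG))
    else:
        # Pick an already-mentioned parent entity and follow a relation.
        parent = random.choice(sorted(local_entities))
        relation, entity = random.choice(list(KG.get(parent, {"self": parent}).items()))
    local_entities.add(entity)  # the local knowledge graph grows
    # Copy one of the entity's aliases; its tokens may be out of vocabulary.
    return {"type": mention_type, "entity": entity,
            "token": random.choice(ALIASES.get(entity, [entity]))}

mentioned = {"Barack Obama"}
print(generate_step(mentioned))
```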

Key Contributions

  1. Knowledge Graph Integration: KGLM incorporates facts from a structured external knowledge graph, enabling it to render information not seen during training. This mechanism allows the model to generate out-of-vocabulary entities and rare tokens such as dates and numbers more effectively. The model maintains a dynamically growing local knowledge graph containing the entities mentioned in the text so far and the entities related to them through predefined relations.
  2. Dataset Introduction: The Linked WikiText-2 dataset is introduced as a distantly supervised corpus whose text is aligned to the Wikidata knowledge graph. It provides a platform for training knowledge graph-based language models while permitting direct comparison with the popular WikiText-2 benchmark.
  3. Performance Evaluation: In empirical evaluation, KGLM outperforms conventional language models, particularly in generating accurate facts. The model achieves lower perplexity and lower unknown-penalized perplexity (a hedged formulation of this metric is sketched after this list), highlighting its proficiency in rendering rare tokens. Fact-completion experiments further underscore KGLM's ability to leverage the knowledge graph to produce accurate factual sentences.
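Unknown-penalized perplexity can be understood as standard perplexity in which the probability mass the model assigns to the UNK token is spread uniformly over the out-of-vocabulary types it could stand for. The following is a hedged reconstruction of that idea rather than the paper's exact notation:

$$
\mathrm{UPP} = \exp\!\left(-\frac{1}{N}\sum_{i=1}^{N}\log p^{*}(w_i)\right),
\qquad
p^{*}(w_i) =
\begin{cases}
p(w_i \mid w_{<i}) & \text{if } w_i \text{ is in the vocabulary},\\[4pt]
\dfrac{p(\mathrm{UNK} \mid w_{<i})}{|\mathcal{U}|} & \text{otherwise},
\end{cases}
$$

where $|\mathcal{U}|$ denotes the number of distinct out-of-vocabulary types that the UNK token can represent.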

Results and Implications

The KGLM's integration of knowledge graphs represents a significant step toward language models that generate text consistent with real-world facts. By maintaining a local knowledge graph, the KGLM can reference entities dynamically, offering flexibility in handling rare and unseen entities. This improves the model's accuracy in rendering factual information and reduces completions that merely reflect co-occurrence statistics memorized during training (the titular "Barack's wife, Hillary" error).
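The local knowledge graph idea can be illustrated with a few lines of code: start from the entities already mentioned in the text and expand with their related entities via predefined relations, so the model can later copy facts about them. The data structures and facts below are assumptions for illustration, not the paper's code or actual Wikidata records.

```python
from collections import defaultdict

# Toy subset of a Wikidata-like graph: (subject, relation) -> object.
FULL_KG = {
    ("Barack Obama", "spouse"): "Michelle Obama",
    ("Barack Obama", "position held"): "President of the United States",
    ("Michelle Obama", "educated at"): "Princeton University",
}

def expand_local_graph(mentioned_entities):
    """Collect the facts one hop away from the entities mentioned so far."""
    local = defaultdict(dict)
    for (subj, rel), obj in FULL_KG.items():
        if subj in mentioned_entities:
            local[subj][rel] = obj
    return dict(local)

# Once "Barack Obama" appears in the text, his spouse and position become
# facts the model can select and copy when continuing the sentence.
print(expand_local_graph({"Barack Obama"}))
```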

Practically, this improvement matters for applications that require factual accuracy, such as automated content generation, educational tools, and AI-driven data analysis. Theoretically, KGLM provides a framework for future work on more reliable and factually aware language models.

Future Research Directions

The work opens several avenues for future research. It highlights the need to manage large-scale knowledge graphs efficiently and to improve marginalization during inference. Furthermore, integrating disparate knowledge sources and extending the approach to other domains remain important steps toward enhancing language models' capabilities.

In summary, the paper outlines a strategic approach to overcoming the limitations of traditional language models in rendering facts by using knowledge graphs. The KGLM sets a precedent for subsequent research on fact-aware language modeling, underlining the importance of linking external knowledge structures with the generation process.

Authors (5)
  1. Robert L. Logan IV (13 papers)
  2. Nelson F. Liu (19 papers)
  3. Matthew E. Peters (27 papers)
  4. Matt Gardner (57 papers)
  5. Sameer Singh (96 papers)
Citations (179)