
Knowledge Graph Generation From Text (2211.10511v1)

Published 18 Nov 2022 in cs.CL and cs.LG

Abstract: In this work we propose a novel end-to-end multi-stage Knowledge Graph (KG) generation system from textual inputs, separating the overall process into two stages. The graph nodes are generated first using pretrained LLM, followed by a simple edge construction head, enabling efficient KG extraction from the text. For each stage we consider several architectural choices that can be used depending on the available training resources. We evaluated the model on a recent WebNLG 2020 Challenge dataset, matching the state-of-the-art performance on text-to-RDF generation task, as well as on New York Times (NYT) and a large-scale TekGen datasets, showing strong overall performance, outperforming the existing baselines. We believe that the proposed system can serve as a viable KG construction alternative to the existing linearization or sampling-based graph generation approaches. Our code can be found at https://github.com/IBM/Grapher

Knowledge Graph Generation from Text

The paper "Knowledge Graph Generation from Text" by Igor Melnyk, Pierre Dognin, and Payel Das presents a methodological advance in automatic Knowledge Graph (KG) construction. The proposed framework, Grapher, addresses the extraction of structured information from text with an end-to-end multi-stage approach: a pre-trained language model such as T5 first generates the graph nodes, and a lightweight head then predicts the edges between them.

Methodological Overview

The KG generation process is split into two sequential stages. First, the node generation stage fine-tunes a pre-trained language model for entity extraction, translating the input text into the nodes that represent the core entities of the resulting graph. Two architectural strategies are explored: text nodes, where the model emits the entities as a single generated sequence, and query nodes, which borrow the learned-query idea from the DETR architecture to handle permutation invariance among entities.
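In the text-node setting, the decoder emits all entities as one flat string, which must then be split back into a node list. The sketch below illustrates that post-processing step; the separator token `__NODE_SEP__` and the deduplication policy are illustrative assumptions, not the paper's actual tokens.

```python
def parse_node_sequence(generated: str, sep: str = "__NODE_SEP__") -> list[str]:
    """Split a flat generated node string into an ordered, deduplicated node list.

    `sep` is a hypothetical separator token; the real system defines its own
    special tokens during fine-tuning.
    """
    seen, ordered = set(), []
    for raw in generated.split(sep):
        node = raw.strip()
        if node and node not in seen:
            seen.add(node)
            ordered.append(node)
    return ordered

# Example decoder output with a repeated entity:
text = "Alan Turing __NODE_SEP__ England __NODE_SEP__ Alan Turing __NODE_SEP__ 1912"
print(parse_node_sequence(text))  # ['Alan Turing', 'England', '1912']
```

Deduplication matters here because a sequence decoder can mention the same entity twice, while a graph needs each entity as a single node.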

Following node extraction, the graph edges are generated either with a sequence-based approach using GRUs or with a classification head over node pairs. The choice hinges on efficiency requirements and on edge-prediction challenges such as edge sparsity, since most node pairs are unconnected. This stage delineates the relationships among the extracted nodes and thus determines the coherence of the overall graph structure.
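The classification variant can be pictured as scoring every ordered node pair over a relation vocabulary that includes a dedicated "no edge" class, which is how sparsity is absorbed. A minimal sketch, assuming untrained random weights and stand-in node embeddings (the real model learns both):

```python
import numpy as np

rng = np.random.default_rng(0)

NO_EDGE = 0        # reserved class index meaning "no relation between this pair"
NUM_RELATIONS = 4  # hypothetical relation vocabulary size, including NO_EDGE
D = 8              # node embedding dimension (illustrative)

def classify_edges(node_emb: np.ndarray, W: np.ndarray) -> list[tuple[int, int, int]]:
    """Score each ordered node pair (i, j), i != j, over the relation
    vocabulary and keep only pairs whose argmax class is not NO_EDGE."""
    n = node_emb.shape[0]
    triples = []
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            pair = np.concatenate([node_emb[i], node_emb[j]])  # shape (2*D,)
            logits = W @ pair                                  # shape (NUM_RELATIONS,)
            rel = int(np.argmax(logits))
            if rel != NO_EDGE:
                triples.append((i, rel, j))                    # (head, relation, tail)
    return triples

nodes = rng.normal(size=(3, D))                 # stand-in node embeddings
W = rng.normal(size=(NUM_RELATIONS, 2 * D))     # untrained classifier weights
print(classify_edges(nodes, W))
```

The quadratic loop over node pairs is what makes the classification head simple but potentially costly for large graphs, which motivates the alternative sequence-based edge decoder.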

Evaluation and Results

Grapher is evaluated on several benchmarks, including WebNLG 2020, New York Times (NYT), and the large-scale TekGen dataset. Notably, the system matches or surpasses existing state-of-the-art methods across these datasets. On WebNLG, for instance, text-based nodes coupled with classification-based edge generation align closely with or exceed established baselines, highlighting the efficacy of the multi-stage design.
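Text-to-RDF systems on these benchmarks are typically scored at the triple level. As a simplified illustration of the idea (the official WebNLG 2020 scorer also supports partial and type-level matching, which this sketch omits), exact-match F1 over (subject, relation, object) triples can be computed as:

```python
def triple_f1(pred: set[tuple], gold: set[tuple]) -> float:
    """Micro F1 over exact-match (subject, relation, object) triples."""
    if not pred or not gold:
        return 0.0
    tp = len(pred & gold)
    if tp == 0:
        return 0.0
    precision = tp / len(pred)
    recall = tp / len(gold)
    return 2 * precision * recall / (precision + recall)

gold = {("Alan Turing", "birthPlace", "England"),
        ("Alan Turing", "birthYear", "1912")}
pred = {("Alan Turing", "birthPlace", "England"),
        ("Alan Turing", "field", "logic")}
print(round(triple_f1(pred, gold), 3))  # 0.5  (1 correct of 2 predicted, 1 of 2 gold)
```

Because exact match requires the whole triple to be reproduced verbatim, a single wrong relation label costs both precision and recall, which is why benchmark scorers also report softer matching variants.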

On the NYT dataset, text-based nodes with GRU-based edge generation achieved the best results, demonstrating the system's adaptability across varied textual datasets. On the extensive TekGen dataset, Grapher's edge-generation strategy, optimized for larger datasets, outperformed the linearization-based systems of previous work.

Implications and Future Directions

The implications of this research are substantial for domains requiring structured data extraction from rich textual sources, such as automated decision-making, information retrieval, and advanced NLP tasks. The proposed system not only presents a flexible framework capable of handling variance in input structures but also advances the conversation on generating efficient and scalable KGs.

Looking forward, opportunities for expansion could include addressing the model's computational complexity regarding edge generation and exploring adaptations for larger graph structures. Extending the model's capabilities to multiple languages and reverse-generation tasks (i.e., text from KGs) remains a compelling avenue for subsequent research.

Conclusion

In sum, the paper contributes meaningfully to KG construction by offering a flexible, efficient, and performant multi-stage approach. Through its strategic employment of pre-trained LLMs and innovative design of node and edge generation techniques, Grapher serves as a robust alternative to traditional graph extraction methodologies, advancing the field toward more integrated and intelligent systems for data structuring from extensive textual corpora.

Authors (3)
  1. Igor Melnyk (28 papers)
  2. Pierre Dognin (18 papers)
  3. Payel Das (104 papers)
Citations (18)