
Double Graph Based Reasoning for Document-level Relation Extraction (2009.13752v1)

Published 29 Sep 2020 in cs.CL, cs.AI, and cs.LG

Abstract: Document-level relation extraction aims to extract relations among entities within a document. Different from sentence-level relation extraction, it requires reasoning over multiple sentences across a document. In this paper, we propose Graph Aggregation-and-Inference Network (GAIN) featuring double graphs. GAIN first constructs a heterogeneous mention-level graph (hMG) to model complex interaction among different mentions across the document. It also constructs an entity-level graph (EG), based on which we propose a novel path reasoning mechanism to infer relations between entities. Experiments on the public dataset, DocRED, show GAIN achieves a significant performance improvement (2.85 on F1) over the previous state-of-the-art. Our code is available at https://github.com/DreamInvoker/GAIN .

Double Graph Based Reasoning for Document-level Relation Extraction

The paper "Double Graph Based Reasoning for Document-level Relation Extraction" presents an innovative approach to the challenging task of document-level relation extraction (RE), with a focus on reasoning over multiple sentences. Unlike sentence-level RE, document-level extraction necessitates a holistic understanding of entities and their relations distributed throughout a text document. The authors propose a Graph Aggregation-and-Inference Network (GAIN), utilizing a double graph approach to address the complex inter-sentential and logical reasoning challenges inherent in this task.

Overview of the GAIN Approach

GAIN introduces a novel architecture featuring two types of graphs: a heterogeneous mention-level graph (hMG) and an entity-level graph (EG). This dual-graph setup allows the model to effectively aggregate and reason over document-spanning information. The mention-level graph is designed to capture interactions among entity mentions in the text, utilizing nodes for each mention and a document node to model overarching document context. It incorporates intra-entity, inter-entity, and document edges to connect mentions within and across sentences. A Graph Convolutional Network (GCN) is employed to learn a document-aware representation for each mention node.
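As a rough illustration of the mention-level graph described above, the sketch below builds an adjacency matrix with intra-entity edges (mentions of the same entity), inter-entity edges (mentions co-occurring in a sentence), and document edges to a single document node, then applies one GCN propagation step. This is a minimal sketch, not the released GAIN code: the function names are invented, and the real model distinguishes edge types with separate weight matrices rather than collapsing them into one adjacency matrix as done here.

```python
import numpy as np

def build_hmg(mentions):
    """mentions: list of (entity_id, sentence_id) tuples.
    Returns an adjacency matrix over mention nodes plus one document node."""
    n = len(mentions)
    A = np.zeros((n + 1, n + 1))  # index n is the document node
    for i in range(n):
        for j in range(i + 1, n):
            ent_i, sent_i = mentions[i]
            ent_j, sent_j = mentions[j]
            if ent_i == ent_j:        # intra-entity edge: same underlying entity
                A[i, j] = A[j, i] = 1
            elif sent_i == sent_j:    # inter-entity edge: co-occur in a sentence
                A[i, j] = A[j, i] = 1
        A[i, n] = A[n, i] = 1         # document edge to the document node
    return A

def gcn_layer(A, H, W):
    """One GCN layer: H' = ReLU(D^-1 (A + I) H W)."""
    A_hat = A + np.eye(A.shape[0])
    D_inv = np.diag(1.0 / A_hat.sum(axis=1))
    return np.maximum(D_inv @ A_hat @ H @ W, 0.0)
```

Because every mention connects to the document node, even mentions in distant sentences are at most two hops apart, which is the intuition behind using the document node to model overarching context.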

The aggregation of mention nodes into entity-level representations forms the basis for the EG. This graph, which merges mention information, facilitates reasoning about multi-hop relations between entities through a novel path reasoning mechanism. The path reasoning exploits possible paths between entity pairs to infer relations, effectively modeling relational chains that require logical inference across sentences.
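The two steps above, aggregating mentions into entity nodes and enumerating paths between an entity pair, can be sketched as follows. This is an illustrative simplification under stated assumptions: entity representations are taken as the mean of their mention representations, and only two-hop paths through a single intermediate entity are enumerated; the paper's path reasoning mechanism additionally scores and attends over these paths.

```python
import numpy as np

def aggregate_entities(H_mentions, mention2entity, num_entities):
    """Average each entity's mention representations to form EG node features."""
    d = H_mentions.shape[1]
    E = np.zeros((num_entities, d))
    for e in range(num_entities):
        idx = [i for i, m in enumerate(mention2entity) if m == e]
        E[e] = H_mentions[idx].mean(axis=0)
    return E

def two_hop_paths(adj, h, t):
    """Enumerate intermediate entities z on paths h -> z -> t in the EG."""
    n = adj.shape[0]
    return [z for z in range(n)
            if z not in (h, t) and adj[h, z] and adj[z, t]]
```

In the full model, each enumerated path yields a path representation that is combined with the head and tail entity representations before relation classification, so that evidence chains spanning several sentences directly inform the prediction.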

Performance and Experimental Results

The proposed GAIN model demonstrates significant performance improvements over the state-of-the-art on the DocRED dataset, achieving a notable 2.85-point increase in F1 score. The efficacy of GAIN is underscored by its ability to outperform existing models in both inter-sentence relation extraction and inferential reasoning scenarios. Detailed ablation studies highlight the importance of each graph component and reasoning mechanism in enhancing the model's overall capability.

Implications and Future Directions

The implications of this research are substantial, both practically and theoretically. From a practical standpoint, GAIN's design holds promise for improving the extraction of knowledge from large unstructured text corpora, which is critical for applications in knowledge graph construction and question answering. Theoretically, the paper advances graph-based neural network methodologies by demonstrating the benefits of leveraging heterogeneous graph structures for complex relational reasoning tasks.

In terms of future developments, further exploration could extend the path reasoning to multi-hop scenarios beyond two steps, refining the architecture's ability to handle even more intricate document structures. Additionally, incorporating contextual information from pre-trained language models such as BERT shows potential for enhancing entity representations, as indicated by the GAIN-BERT implementations.

Overall, the research provides a meaningful contribution to document-level relation extraction, paving the way for more sophisticated reasoning models in natural language processing.

Authors (4)
  1. Shuang Zeng
  2. Runxin Xu
  3. Baobao Chang
  4. Lei Li
Citations (212)