GEGA: Graph Convolutional Networks and Evidence Retrieval Guided Attention for Enhanced Document-level Relation Extraction (2407.21384v2)

Published 31 Jul 2024 in cs.CL and cs.AI

Abstract: Document-level relation extraction (DocRE) aims to extract relations between entities from unstructured document text. Compared to sentence-level relation extraction, it demands more complex semantic understanding over a broader context. Some recent studies exploit logical rules within evidence sentences to improve DocRE performance. However, when a dataset does not provide evidence sentences, researchers typically obtain a list of evidence sentences for the entire document through evidence retrieval (ER). DocRE therefore faces two challenges: first, the relevance between the retrieved evidence and specific entity pairs is weak; second, complex cross-relations among long-distance, multiple entities are insufficiently extracted. To overcome these challenges, we propose GEGA, a novel model for DocRE. The model leverages graph neural networks to construct multiple weight matrices that guide attention allocation to evidence sentences, and it employs multi-scale representation aggregation to enhance ER. We then integrate the most effective evidence information to implement both fully supervised and weakly supervised training of the model. We evaluate GEGA on three widely used benchmark datasets: DocRED, Re-DocRED, and Revisit-DocRED. The experimental results indicate that our model achieves comprehensive improvements over the existing SOTA models.
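The central idea in the abstract, using graph-propagated sentence scores to steer attention toward evidence sentences for a given entity pair, can be illustrated with a minimal sketch. The code below is an assumption-laden illustration, not the authors' implementation: the class, layer, and tensor names (EvidenceGuidedAttention, sent_reps, adj, head_rep, tail_rep) are hypothetical, and GEGA's multiple weight matrices and multi-scale aggregation are omitted.

```python
# Illustrative sketch only (not the GEGA release): one GCN propagation step over a
# sentence-level graph produces per-sentence evidence scores, which then bias an
# entity-pair-conditioned attention over sentences. All names and shapes are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class EvidenceGuidedAttention(nn.Module):
    def __init__(self, hidden_dim: int):
        super().__init__()
        self.gcn_weight = nn.Linear(hidden_dim, hidden_dim)       # GCN transformation
        self.evidence_scorer = nn.Linear(hidden_dim, 1)           # sentence -> evidence logit
        self.pair_query = nn.Linear(2 * hidden_dim, hidden_dim)   # entity-pair query

    def forward(self, sent_reps, adj, head_rep, tail_rep):
        # sent_reps: [num_sents, hidden_dim] sentence representations
        # adj:       [num_sents, num_sents] normalized adjacency of the sentence graph
        # head_rep, tail_rep: [hidden_dim] head/tail entity representations

        # One GCN step: aggregate neighboring sentences, then transform.
        gcn_out = F.relu(self.gcn_weight(adj @ sent_reps))

        # Evidence logits from the graph-enhanced sentence representations.
        evidence_logits = self.evidence_scorer(gcn_out).squeeze(-1)        # [num_sents]

        # Entity-pair-conditioned attention logits over sentences.
        query = self.pair_query(torch.cat([head_rep, tail_rep], dim=-1))   # [hidden_dim]
        attn_logits = sent_reps @ query                                    # [num_sents]

        # Guide attention with the evidence signal before normalizing.
        weights = F.softmax(attn_logits + evidence_logits, dim=-1)
        context = weights @ sent_reps                                      # [hidden_dim]
        return context, weights

# Example usage with random tensors (shapes only for illustration):
# sents = torch.randn(12, 256); adj = torch.eye(12)
# head = torch.randn(256); tail = torch.randn(256)
# ctx, w = EvidenceGuidedAttention(256)(sents, adj, head, tail)
```

Adding the evidence logits to the attention logits before the softmax is one simple way a retrieval signal could bias attention toward evidence sentences; the paper additionally integrates the retrieved evidence into both fully supervised and weakly supervised training, which this sketch does not cover.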

Authors (6)
  1. Yanxu Mao (5 papers)
  2. Peipei Liu (14 papers)
  3. Tiehan Cui (5 papers)
  4. Xiaohui Chen (73 papers)
  5. Zuhui Yue (1 paper)
  6. Zheng Li (326 papers)