A Bi-consolidating Model for Joint Relational Triple Extraction (2404.03881v5)

Published 5 Apr 2024 in cs.CL

Abstract: Current methods for relational triple extraction make predictions directly over possible entity pairs in a raw sentence, without relying on a separate entity recognition step. The task suffers from a serious semantic overlapping problem, in which several relation triples may share one or two entities in a sentence. In this paper, based on a two-dimensional sentence representation, a bi-consolidating model is proposed to address this problem by simultaneously reinforcing the local and global semantic features relevant to a relation triple. The model consists of a local consolidation component and a global consolidation component. The first component uses a pixel difference convolution to enhance the semantic information of a possible triple representation from adjacent regions and to mitigate noise from neighbouring regions. The second component strengthens the triple representation with channel attention and spatial attention, which has the advantage of learning remote semantic dependencies in a sentence. Both components help improve the performance of entity identification and relation type classification in relation triple extraction. Evaluated on several public datasets, the bi-consolidating model achieves competitive performance. Analytical experiments demonstrate the effectiveness of our model for relational triple extraction and provide motivation for other natural language processing tasks.
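The abstract describes an architecture built from two consolidation components over a two-dimensional (token-pair) sentence representation. The following is a minimal sketch of how such components could look in PyTorch; it is not the authors' code. The class names, tensor shapes, kernel sizes, and reduction ratio are illustrative assumptions, with the local part modelled as a central pixel-difference convolution and the global part as squeeze-and-excitation style channel attention followed by CBAM-style spatial attention.

```python
# Hypothetical sketch of the two consolidation components, assuming a 2D
# sentence representation of shape (batch, channels, seq_len, seq_len).
import torch
import torch.nn as nn
import torch.nn.functional as F


class CentralPixelDifferenceConv(nn.Module):
    """Central pixel-difference convolution: aggregates differences between each
    position and its neighbours, i.e. sum_i w_i * (x_i - x_center), which equals
    conv(x, w) - x_center * sum(w)."""

    def __init__(self, channels: int, kernel_size: int = 3):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, kernel_size,
                              padding=kernel_size // 2, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # A 1x1 convolution with the summed kernel weights reproduces
        # x_center * sum(w), so subtracting it yields the difference form.
        w_sum = self.conv.weight.sum(dim=(2, 3), keepdim=True)
        return self.conv(x) - F.conv2d(x, w_sum)


class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style channel attention."""

    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        scale = self.fc(x.mean(dim=(2, 3))).view(b, c, 1, 1)
        return x * scale


class SpatialAttention(nn.Module):
    """CBAM-style spatial attention over the 2D sentence grid, intended to
    capture longer-range dependencies between token pairs."""

    def __init__(self, kernel_size: int = 7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        pooled = torch.cat([x.mean(dim=1, keepdim=True),
                            x.amax(dim=1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.conv(pooled))


class BiConsolidatingBlock(nn.Module):
    """Local consolidation (pixel-difference conv) followed by global
    consolidation (channel attention + spatial attention)."""

    def __init__(self, channels: int):
        super().__init__()
        self.local = CentralPixelDifferenceConv(channels)
        self.channel_att = ChannelAttention(channels)
        self.spatial_att = SpatialAttention()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = F.relu(self.local(x))
        return self.spatial_att(self.channel_att(x))


# Example: a table-style representation for a 20-token sentence with 64 channels.
table = torch.randn(2, 64, 20, 20)
out = BiConsolidatingBlock(64)(table)  # same shape: (2, 64, 20, 20)
```

In this sketch the consolidated representation would then feed the entity identification and relation classification heads; those heads, and how the 2D table is built from token embeddings, are left out because the abstract does not specify them.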

Authors (6)
  1. Xiaocheng Luo (1 paper)
  2. Yanping Chen (38 papers)
  3. Ruixue Tang (3 papers)
  4. Ruizhang Huang (4 papers)
  5. Yongbin Qin (5 papers)
  6. Caiwei Yang (1 paper)
