A Bi-consolidating Model for Joint Relational Triple Extraction (2404.03881v5)
Abstract: Current methods for extracting relational triples directly predict relations over possible entity pairs in a raw sentence, without depending on a separate entity recognition step. The task suffers from a serious semantic overlapping problem, in which several relational triples may share one or two entities in a sentence. In this paper, building on a two-dimensional sentence representation, a bi-consolidating model is proposed to address this problem by simultaneously reinforcing the local and global semantic features relevant to a relation triple. The model consists of a local consolidation component and a global consolidation component. The first component uses a pixel difference convolution to enhance the semantic information of a possible triple representation from adjacent regions and to mitigate noise from neighbouring regions. The second component strengthens the triple representation with a channel attention and a spatial attention, which has the advantage of learning remote semantic dependencies in a sentence. Together, they improve the performance of both entity identification and relation type classification in relational triple extraction. Evaluated on several published datasets, the bi-consolidating model achieves competitive performance. Analytical experiments demonstrate the effectiveness of our model for relational triple extraction and provide motivation for other natural language processing tasks.
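The abstract describes two components operating on a two-dimensional (token-pair grid) sentence representation. Below is a minimal PyTorch-style sketch of how such components are commonly built; the module names, kernel sizes, and reduction ratio are illustrative assumptions, not the authors' released implementation. The pixel difference convolution follows the central-difference formulation from the edge-detection literature, and the global component uses squeeze-and-excitation-style channel gating followed by a CBAM-style spatial gate.

```python
# Illustrative sketch only; all hyper-parameters and layer shapes are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class PixelDifferenceConv(nn.Module):
    """Central pixel-difference convolution: equivalent to convolving the
    differences (x_neighbour - x_centre), sharpening local semantic contrast
    in the 2D sentence representation (local consolidation)."""

    def __init__(self, channels: int, kernel_size: int = 3):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, kernel_size,
                              padding=kernel_size // 2, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # sum_i w_i * (x_i - x_c) = conv(x) - (sum_i w_i) * x_c,
        # so the difference form reduces to two convolutions.
        out = self.conv(x)
        centre_w = self.conv.weight.sum(dim=(2, 3), keepdim=True)  # 1x1 kernel
        return out - F.conv2d(x, centre_w)


class ChannelSpatialAttention(nn.Module):
    """Global consolidation: a channel attention gate followed by a spatial
    attention gate, modelling long-range dependencies across the grid."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())
        self.spatial = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        # Channel attention: global average pool -> bottleneck MLP -> gate.
        x = x * self.mlp(x.mean(dim=(2, 3))).view(b, c, 1, 1)
        # Spatial attention: pool over channels, 7x7 conv, sigmoid gate.
        pooled = torch.cat([x.mean(1, keepdim=True),
                            x.amax(1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial(pooled))


if __name__ == "__main__":
    # Toy 2D sentence representation: batch 1, 64 channels, 20x20 token grid.
    rep = torch.randn(1, 64, 20, 20)
    rep = PixelDifferenceConv(64)(rep)       # local consolidation
    rep = ChannelSpatialAttention(64)(rep)   # global consolidation
    print(rep.shape)  # torch.Size([1, 64, 20, 20])
```

Chaining the two modules mirrors the abstract's design: the difference convolution consolidates evidence from a cell's immediate neighbourhood, while the attention gates re-weight the whole grid so that distant but related token pairs can influence each triple representation.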
Authors: Xiaocheng Luo, Yanping Chen, Ruixue Tang, Ruizhang Huang, Yongbin Qin, Caiwei Yang