Simultaneously Self-Attending to All Mentions for Full-Abstract Biological Relation Extraction (1802.10569v1)

Published 28 Feb 2018 in cs.CL

Abstract: Most work in relation extraction forms a prediction by looking at a short span of text within a single sentence containing a single entity pair mention. This approach often does not consider interactions across mentions, requires redundant computation for each mention pair, and ignores relationships expressed across sentence boundaries. These problems are exacerbated by the document- (rather than sentence-) level annotation common in biological text. In response, we propose a model which simultaneously predicts relationships between all mention pairs in a document. We form pairwise predictions over entire paper abstracts using an efficient self-attention encoder. All-pairs mention scores allow us to perform multi-instance learning by aggregating over mentions to form entity pair representations. We further adapt to settings without mention-level annotation by jointly training to predict named entities and adding a corpus of weakly labeled data. In experiments on two Biocreative benchmark datasets, we achieve state of the art performance on the Biocreative V Chemical Disease Relation dataset for models without external KB resources. We also introduce a new dataset an order of magnitude larger than existing human-annotated biological information extraction datasets and more accurate than distantly supervised alternatives.

Simultaneously Self-Attending to All Mentions for Full-Abstract Biological Relation Extraction: A Summary

The paper "Simultaneously Self-Attending to All Mentions for Full-Abstract Biological Relation Extraction" presents an innovative approach to relation extraction (RE) in biological texts, addressing key limitations prevalent in traditional methods. Typical RE models focus on single-sentence contexts and individual entity pair mentions, often resulting in redundancy and inability to capture inter-sentential relationships — a crucial aspect given that a significant percentage of biological interactions span across sentences. The proposed work, authored by Patrick Verga, Emma Strubell, and Andrew McCallum from the University of Massachusetts Amherst, introduces a model that utilizes self-attention mechanisms to overcome these challenges.

Model Design and Methodology

The framework, termed the Bi-affine Relation Attention Network (BRAN), employs a self-attention encoder to form relation predictions across all mention pairs within an entire abstract. By attending over the full document, the model captures interactions beyond sentence boundaries, which is essential for biological corpora whose annotations are typically document-level rather than sentence-level. Because labels attach to entity pairs rather than to individual mention pairs, the model performs multi-instance learning, aggregating mention-pair scores into entity pair representations (a minimal sketch of this pooling step follows); it further adapts to settings without mention-level annotation by jointly training a named entity recognition objective and incorporating weakly labeled data.
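
The pooling step can be pictured with a short sketch. The snippet below is a minimal, hypothetical PyTorch rendering of multi-instance aggregation over mention-pair scores; the function name and tensor shapes are illustrative rather than taken from the released code. It uses LogSumExp, a smooth maximum that lets the most confident mention pair dominate while every pair still receives gradient.

```python
import torch

def entity_pair_score(mention_pair_scores: torch.Tensor) -> torch.Tensor:
    """Pool per-mention-pair relation scores into one entity-pair score.

    mention_pair_scores: (num_mention_pairs, num_relations) raw scores, one row
    for each co-occurring mention pair of a given entity pair in the abstract.
    LogSumExp acts as a soft maximum over mention pairs, so all supporting
    mentions contribute to the gradient during training.
    """
    return torch.logsumexp(mention_pair_scores, dim=0)
```

In training, a pooled score vector of this kind would typically feed a softmax cross-entropy loss over the relation labels assigned to the entity pair.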

The architecture combines the Transformer's self-attention layers with convolutions to efficiently encode long sequences, so each token representation benefits from cross-sentence context. Head and tail projections of these token representations then feed a bi-affine operation that scores every pairwise relation across the document simultaneously, avoiding the computational overhead of encoding each mention pair independently; a sketch of this scoring step appears below.
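
The following is a minimal sketch of such a bi-affine pairwise scorer, not the authors' released implementation: the class name, layer sizes, and the specific choice of separate head/tail MLP projections followed by a per-relation bilinear map are assumptions based on the description above.

```python
import torch
import torch.nn as nn

class BiaffinePairScorer(nn.Module):
    """Score every (head, tail) token pair for every relation type in one pass."""

    def __init__(self, hidden_dim: int, num_relations: int):
        super().__init__()
        # Separate projections give each token distinct "head" and "tail" roles.
        self.head_mlp = nn.Sequential(nn.Linear(hidden_dim, hidden_dim), nn.ReLU())
        self.tail_mlp = nn.Sequential(nn.Linear(hidden_dim, hidden_dim), nn.ReLU())
        # One hidden_dim x hidden_dim bilinear map per relation type.
        self.bilinear = nn.Parameter(torch.empty(num_relations, hidden_dim, hidden_dim))
        nn.init.xavier_uniform_(self.bilinear)

    def forward(self, token_reprs: torch.Tensor) -> torch.Tensor:
        # token_reprs: (seq_len, hidden_dim) contextual encodings of one abstract,
        # e.g. produced by a self-attention encoder over the full document.
        heads = self.head_mlp(token_reprs)   # (seq_len, hidden_dim)
        tails = self.tail_mlp(token_reprs)   # (seq_len, hidden_dim)
        # scores[i, j, r] = heads[i] @ bilinear[r] @ tails[j], for all i, j, r at once.
        return torch.einsum("ih,rht,jt->ijr", heads, self.bilinear, tails)
```

The resulting (seq_len, seq_len, num_relations) score tensor is what a multi-instance pooling step, such as the one sketched earlier, reduces to a single score per entity pair.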

Experimental Evaluation

The model's efficacy is validated on two benchmark datasets, the Biocreative V Chemical Disease Relation (CDR) dataset and the Biocreative VI ChemProt dataset. Notably, on CDR, BRAN achieves state-of-the-art performance among models that do not use external knowledge base (KB) resources, underscoring its capacity for effective RE without dependence on external features. The authors also introduce a new dataset an order of magnitude larger than existing human-annotated biological information extraction resources and more accurate than distantly supervised alternatives, paving the way for broader training and evaluation of RE models.

Implications and Future Directions

From a practical standpoint, the proposed model holds promise for improving automated extraction of complex biological relations from large textual resources, facilitating information retrieval and knowledge synthesis in bioinformatics. Theoretically, the combination of multi-instance learning and self-attention suggests broader applications in natural language processing tasks that require preserving context across entire documents.

Future avenues could explore the integration of open information extraction frameworks within the current model, potentially accommodating a more diverse array of relation schemas. Potential applications also extend beyond relation extraction to tasks such as coreference resolution and entity resolution, where all-pairs scoring could improve both accuracy and efficiency.

The code and dataset accompanying this research provide a robust baseline and benchmark for forthcoming RE system innovations, encouraging community-driven enhancement and validation of methodologies in large-scale biological text processing.

Authors (3)
  1. Patrick Verga (8 papers)
  2. Emma Strubell (60 papers)
  3. Andrew McCallum (132 papers)
Citations (248)