Hybrid Attention-Based Transformer Block Model for Distant Supervision Relation Extraction (2003.11518v2)

Published 10 Mar 2020 in cs.CL and cs.LG

Abstract: With the explosive growth of digital text, it is challenging to efficiently obtain specific knowledge from massive amounts of unstructured text. As a basic NLP task, relation extraction aims to extract the semantic relation between entity pairs from a given text. To avoid manual labeling of datasets, distant supervision relation extraction (DSRE) has been widely used; it relies on a knowledge base to automatically annotate datasets. Unfortunately, this method suffers heavily from wrong labeling due to its underlying strong assumptions. To address this issue, we propose a new framework that uses a hybrid attention-based Transformer block with multi-instance learning to perform the DSRE task. More specifically, the Transformer block is first used as the sentence encoder to capture syntactic information, mainly relying on multi-head self-attention to extract word-level features. Then, a more concise sentence-level attention mechanism is adopted to form the bag representation, incorporating the valid information of each sentence so that the bag is represented effectively. Experimental results on the public New York Times (NYT) dataset demonstrate that the proposed approach outperforms state-of-the-art algorithms on the evaluation set, verifying the effectiveness of our model for the DSRE task.
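The abstract describes a two-level attention pipeline: word-level multi-head self-attention inside a Transformer block to encode each sentence, followed by sentence-level attention that weights the sentences in a bag before relation classification. The following is a minimal PyTorch sketch of that structure, not the authors' code; the layer choices (e.g., `torch.nn.TransformerEncoderLayer`), dimensions, pooling, and the 53-class output size for NYT are assumptions made for illustration.

```python
# Minimal sketch (assumptions, not the paper's implementation):
# word-level multi-head self-attention via a Transformer block, then
# sentence-level attention over a bag of sentences for DSRE.
import torch
import torch.nn as nn


class HybridAttentionDSRE(nn.Module):
    def __init__(self, vocab_size, embed_dim=128, num_heads=4, num_relations=53):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        # Word-level encoder: multi-head self-attention inside one Transformer block.
        self.encoder = nn.TransformerEncoderLayer(
            d_model=embed_dim, nhead=num_heads, batch_first=True
        )
        # Sentence-level attention: a learned query scores each sentence in the bag
        # so that informative sentences dominate the bag representation.
        self.sent_query = nn.Parameter(torch.randn(embed_dim))
        self.classifier = nn.Linear(embed_dim, num_relations)

    def forward(self, bag_token_ids):
        # bag_token_ids: (num_sentences, seq_len) token ids for one bag.
        x = self.embed(bag_token_ids)                   # (S, L, D)
        x = self.encoder(x)                             # word-level self-attention
        sent_repr = x.mean(dim=1)                       # (S, D) pooled sentence vectors
        scores = sent_repr @ self.sent_query            # (S,) attention scores
        alpha = torch.softmax(scores, dim=0)            # sentence-level attention weights
        bag_repr = (alpha.unsqueeze(1) * sent_repr).sum(dim=0)  # (D,) bag vector
        return self.classifier(bag_repr)                # relation logits


# Toy usage: a bag of 3 sentences, each 10 tokens long.
model = HybridAttentionDSRE(vocab_size=10000)
logits = model(torch.randint(1, 10000, (3, 10)))
print(logits.shape)  # torch.Size([53])
```

In this sketch the softmax over sentence scores plays the role of the sentence-level attention described above: noisy, wrongly labeled sentences receive small weights, so multi-instance learning can still recover a useful bag representation.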

Authors (4)
  1. Yan Xiao (32 papers)
  2. Yaochu Jin (108 papers)
  3. Ran Cheng (130 papers)
  4. Kuangrong Hao (12 papers)
Citations (30)