Abstractified Multi-instance Learning (AMIL) for Biomedical Relation Extraction (2110.12501v1)

Published 24 Oct 2021 in cs.CL and cs.LG

Abstract: Relation extraction in the biomedical domain is a challenging task due to a lack of labeled data and a long-tail distribution of fact triples. Many works leverage distant supervision which automatically generates labeled data by pairing a knowledge graph with raw textual data. Distant supervision produces noisy labels and requires additional techniques, such as multi-instance learning (MIL), to denoise the training signal. However, MIL requires multiple instances of data and struggles with very long-tail datasets such as those found in the biomedical domain. In this work, we propose a novel reformulation of MIL for biomedical relation extraction that abstractifies biomedical entities into their corresponding semantic types. By grouping entities by types, we are better able to take advantage of the benefits of MIL and further denoise the training signal. We show this reformulation, which we refer to as abstractified multi-instance learning (AMIL), improves performance in biomedical relationship extraction. We also propose a novel relationship embedding architecture that further improves model performance.
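The abstract's core idea — abstractifying entities into their semantic types so that rare entity pairs can share multi-instance learning bags — can be sketched as follows. This is a minimal illustration, not the paper's implementation; the entity names, semantic types, and sentences are hypothetical examples:

```python
from collections import defaultdict

# Hypothetical mapping from entities to semantic types
# (in practice this might come from a resource such as UMLS).
ENTITY_TYPE = {
    "aspirin": "Pharmacologic Substance",
    "ibuprofen": "Pharmacologic Substance",
    "headache": "Sign or Symptom",
    "fever": "Sign or Symptom",
}

# Distantly supervised sentence instances: (head entity, tail entity, sentence).
instances = [
    ("aspirin", "headache", "Aspirin is commonly used to treat headache."),
    ("ibuprofen", "headache", "Ibuprofen relieved the patient's headache."),
    ("aspirin", "fever", "Aspirin reduced the fever within hours."),
]

def standard_mil_bags(instances):
    """Standard MIL: one bag per (head entity, tail entity) pair.
    Long-tail pairs end up in tiny bags, weakening the denoising effect."""
    bags = defaultdict(list)
    for head, tail, sentence in instances:
        bags[(head, tail)].append(sentence)
    return dict(bags)

def amil_bags(instances):
    """AMIL-style bagging: replace entities with their semantic types,
    so same-typed pairs pool their sentences into larger bags."""
    bags = defaultdict(list)
    for head, tail, sentence in instances:
        key = (ENTITY_TYPE[head], ENTITY_TYPE[tail])
        bags[key].append(sentence)
    return dict(bags)

print(len(standard_mil_bags(instances)))  # 3 small bags, one per entity pair
print(len(amil_bags(instances)))          # 1 larger bag at the type level
```

Under this grouping, the three sentences above form three singleton bags in standard MIL but a single three-sentence bag in AMIL, which is the mechanism the abstract credits with better exploiting MIL on long-tail biomedical data.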

Authors (8)
  1. William Hogan (12 papers)
  2. Molly Huang (1 paper)
  3. Yannis Katsis (13 papers)
  4. Tyler Baldwin (4 papers)
  5. Ho-Cheol Kim (5 papers)
  6. Yoshiki Vazquez Baeza (1 paper)
  7. Andrew Bartko (2 papers)
  8. Chun-Nan Hsu (11 papers)
Citations (10)