
Experiment Segmentation in Scientific Discourse as Clause-level Structured Prediction using Recurrent Neural Networks (1702.05398v1)

Published 17 Feb 2017 in cs.CL

Abstract: We propose a deep learning model for identifying structure within experiment narratives in scientific literature. We take a sequence labeling approach to this problem, and label clauses within experiment narratives to identify the different parts of the experiment. Our dataset consists of paragraphs taken from open access PubMed papers, labeled with rhetorical information as a result of our pilot annotation. Our model is a Recurrent Neural Network (RNN) with Long Short-Term Memory (LSTM) cells that labels clauses. The clause representations are computed by combining word representations using a novel attention mechanism that involves a separate RNN. We compare this model against LSTMs where the input layer has simple or no attention, and against a feature-rich CRF model. Furthermore, we describe how our work could be useful for information extraction from scientific literature.
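The core idea in the abstract — building a clause representation by combining word vectors with attention scores produced by a separate RNN — can be illustrated with a minimal sketch. This is not the authors' implementation; it assumes a plain tanh RNN for scoring and a softmax-weighted sum for pooling, with all dimensions and parameter names chosen here for illustration.

```python
import numpy as np

def simple_rnn(xs, W_x, W_h, b):
    """Run a plain tanh RNN over word vectors; return all hidden states."""
    h = np.zeros(W_h.shape[0])
    hidden_states = []
    for x in xs:
        h = np.tanh(W_x @ x + W_h @ h + b)
        hidden_states.append(h)
    return np.stack(hidden_states)

def attention_pool(word_vecs, rnn_params, v):
    """Combine word vectors into one clause vector.

    Attention scores come from a separate RNN run over the same words
    (one scalar per word via the projection vector v), then the clause
    vector is the softmax-weighted sum of the word vectors.
    """
    hs = simple_rnn(word_vecs, *rnn_params)
    scores = hs @ v                       # one scalar score per word
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()              # softmax over the clause's words
    return weights @ word_vecs            # attention-weighted combination

# Toy dimensions and random parameters, purely for illustration.
rng = np.random.default_rng(0)
d_word, d_hid = 8, 6
rnn_params = (rng.normal(size=(d_hid, d_word)) * 0.1,  # W_x
              rng.normal(size=(d_hid, d_hid)) * 0.1,   # W_h
              np.zeros(d_hid))                         # b
v = rng.normal(size=d_hid)

clause_words = np.stack([rng.normal(size=d_word) for _ in range(5)])
clause_vec = attention_pool(clause_words, rnn_params, v)
print(clause_vec.shape)  # a single d_word-dimensional clause vector
```

In the paper's setting, one such clause vector would be computed per clause, and a second LSTM would then run over the sequence of clause vectors to emit a rhetorical label for each clause.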

Authors (4)
  1. Pradeep Dasigi (29 papers)
  2. Gully A. P. C. Burns (1 paper)
  3. Eduard Hovy (115 papers)
  4. Anita de Waard (1 paper)
Citations (26)
