
A Supervised Approach to Extractive Summarisation of Scientific Papers (1706.03946v1)

Published 13 Jun 2017 in cs.CL, cs.AI, cs.NE, stat.AP, and stat.ML

Abstract: Automatic summarisation is a popular approach to reducing a document to its main arguments. Recent research in the area has focused on neural approaches to summarisation, which can be very data-hungry. However, few large datasets exist, and none for the traditionally popular domain of scientific publications, which opens up challenging research avenues centred on encoding large, complex documents. In this paper, we introduce a new dataset for summarisation of computer science publications by exploiting a large resource of author-provided summaries, and show straightforward ways of extending it further. We develop models on the dataset making use of both neural sentence encoding and traditionally used summarisation features, and show that models which encode sentences as well as their local and global context perform best, significantly outperforming well-established baseline methods.
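To make the extractive setting concrete, here is a minimal illustrative sketch, not the paper's model: each sentence is scored with a few traditional hand-crafted features (position, length, title overlap) and the top-k sentences are extracted in document order. The feature weights here are arbitrary placeholders; a supervised approach like the paper's would instead learn a scorer from author-provided summaries, and the paper's best models additionally encode each sentence's local and global context.

```python
# Illustrative extractive summariser (assumption: a toy stand-in for the
# paper's supervised models). Scores sentences with simple hand-crafted
# features and keeps the k highest-scoring ones in original order.

def featurize(sentence, position, n_sentences, title_words):
    """Return a small feature vector for one sentence."""
    words = set(sentence.lower().split())
    return [
        1.0 - position / max(n_sentences - 1, 1),             # earlier sentences score higher
        min(len(words) / 20.0, 1.0),                          # length, capped at 20 words
        len(words & title_words) / max(len(title_words), 1),  # overlap with the title
    ]

def extract_summary(sentences, title, k=2, weights=(0.5, 0.2, 0.3)):
    """Score every sentence and return the top-k in document order.

    `weights` are hypothetical hand-set values; a supervised model
    would learn the scoring function from labelled data instead.
    """
    title_words = set(title.lower().split())
    scores = []
    for i, s in enumerate(sentences):
        feats = featurize(s, i, len(sentences), title_words)
        scores.append((sum(w * f for w, f in zip(weights, feats)), i))
    # Keep the k highest-scoring sentences, then restore document order.
    top = sorted(sorted(scores, reverse=True)[:k], key=lambda t: t[1])
    return [sentences[i] for _, i in top]
```

For example, calling `extract_summary` on a short document with a matching title will favour early sentences that share vocabulary with the title, which is the basic intuition behind position and title-overlap features in classical extractive summarisation.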

Authors (3)
  1. Ed Collins (1 paper)
  2. Isabelle Augenstein (131 papers)
  3. Sebastian Riedel (140 papers)
Citations (96)
