
Generating Multiple-Length Summaries via Reinforcement Learning for Unsupervised Sentence Summarization (2212.10843v1)

Published 21 Dec 2022 in cs.CL and cs.AI

Abstract: Sentence summarization shortens a given text while preserving its core content. Unsupervised approaches have been studied to summarize texts without human-written summaries. However, recent unsupervised models are extractive: they remove words from the text and are thus less flexible than abstractive summarization. In this work, we devise an abstractive model based on reinforcement learning without ground-truth summaries. We formulate unsupervised summarization as a Markov decision process with rewards representing summary quality. To further enhance summary quality, we develop a multi-summary learning mechanism that generates multiple summaries of varying lengths for a given text, letting the summaries mutually enhance each other. Experimental results show that the proposed model substantially outperforms both abstractive and extractive baselines, while also frequently generating new words not contained in the input text.
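The core idea described above, a policy-gradient update driven by a summary-quality reward rather than ground-truth summaries, can be sketched in miniature. Everything below is an illustrative assumption, not the paper's actual model: the reward (source-token overlap plus a length-fit term), the bag-of-words "policy", and the vocabulary are toy stand-ins for the neural summarizer and quality rewards the paper uses.

```python
import numpy as np

# Hypothetical REINFORCE-style sketch of reward-driven unsupervised
# summarization. The reward and policy are toy placeholders.
rng = np.random.default_rng(0)

VOCAB = ["the", "cat", "sat", "mat", "a", "on"]
V = len(VOCAB)

def softmax(logits):
    e = np.exp(logits - logits.max())
    return e / e.sum()

def sample_summary(logits, length, rng):
    """Sample a token sequence (the MDP's actions) from a softmax policy."""
    probs = softmax(logits)
    return rng.choice(V, size=length, p=probs), probs

def reward(tokens, source_ids, target_len):
    """Toy summary-quality reward: overlap with the source text plus a
    length-fit bonus (stand-in for the paper's reward terms)."""
    overlap = float(np.mean([t in source_ids for t in tokens]))
    length_fit = 1.0 - abs(len(tokens) - target_len) / target_len
    return overlap + length_fit

logits = np.zeros(V)            # one logit per vocab item (bag-of-words policy)
source_ids = {0, 1, 2}          # token ids appearing in the input text
lengths = [2, 3, 4]             # multiple target lengths (multi-summary learning)
lr = 0.5

for step in range(200):
    for L in lengths:           # generate one summary per target length
        tokens, probs = sample_summary(logits, L, rng)
        r = reward(tokens, source_ids, L)
        # REINFORCE: raise log-probability of sampled tokens, scaled by reward.
        grad = np.zeros(V)
        for t in tokens:
            onehot = np.zeros(V)
            onehot[t] = 1.0
            grad += onehot - probs
        logits += lr * r * grad / len(tokens)

# After training, the policy should prefer tokens that earn higher reward,
# i.e. tokens appearing in the source text.
probs = softmax(logits)
```

Because the reward is larger for samples containing source tokens, the expected gradient biases the policy toward them, which is the same mechanism, at toy scale, by which a reward signal can replace ground-truth summaries as the training target.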

Authors (5)
  1. Dongmin Hyun (14 papers)
  2. Xiting Wang (42 papers)
  3. Chanyoung Park (83 papers)
  4. Xing Xie (220 papers)
  5. Hwanjo Yu (57 papers)
Citations (6)