Simple Unsupervised Summarization by Contextual Matching
Published 31 Jul 2019 in cs.CL and cs.LG (arXiv:1907.13337v1)
Abstract: We propose an unsupervised method for sentence summarization using only language modeling. The approach employs two language models: one that is generic (i.e., pretrained) and one that is specific to the target domain. We show that combining them with a product-of-experts criterion is enough to maintain continuous contextual matching while preserving output fluency. Experiments on both abstractive and extractive sentence summarization datasets show promising results for our method without exposure to any paired data.
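As a rough illustration of the product-of-experts idea described in the abstract, the sketch below combines the next-token distributions of a generic (pretrained) language model and a domain-specific language model via a weighted product in log space. This is a minimal sketch under stated assumptions: the weight `alpha`, the toy vocabulary, and the random placeholder distributions are illustrative and not details taken from the paper.

```python
import numpy as np

def product_of_experts(log_p_generic, log_p_domain, alpha=0.5):
    """Combine two next-token log-distributions as a weighted product of
    experts: p(x) proportional to p_generic(x)^alpha * p_domain(x)^(1-alpha).
    `alpha` is a hypothetical mixing weight, not a value from the paper."""
    log_p = alpha * log_p_generic + (1.0 - alpha) * log_p_domain
    # Renormalize in log space (log-sum-exp for numerical stability).
    max_lp = log_p.max()
    log_p -= np.log(np.exp(log_p - max_lp).sum()) + max_lp
    return log_p

# Toy example over a 5-token vocabulary; real models would supply these.
rng = np.random.default_rng(0)
log_p_generic = np.log(rng.dirichlet(np.ones(5)))  # generic (pretrained) LM
log_p_domain = np.log(rng.dirichlet(np.ones(5)))   # domain-specific LM
combined = product_of_experts(log_p_generic, log_p_domain, alpha=0.6)
print(np.exp(combined), np.exp(combined).sum())    # probabilities summing to 1
```

In a decoder, such a combined distribution would be used to score candidate tokens at each step, so that one expert favors tokens matching the source context while the other keeps the output fluent; the actual scoring functions used by the authors are not reproduced here.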