
Toward Better Storylines with Sentence-Level Language Models

Published 11 May 2020 in cs.CL (arXiv:2005.05255v1)

Abstract: We propose a sentence-level language model which selects the next sentence in a story from a finite set of fluent alternatives. Since it does not need to model fluency, the sentence-level language model can focus on longer-range dependencies, which are crucial for multi-sentence coherence. Rather than dealing with individual words, our method treats the story so far as a list of pre-trained sentence embeddings and predicts an embedding for the next sentence, which is more efficient than predicting individual words. Notably, this allows us to consider a large number of candidates for the next sentence during training. We demonstrate the effectiveness of our approach with state-of-the-art accuracy on the unsupervised Story Cloze task and with promising results on larger-scale next-sentence prediction tasks.
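The abstract's core idea can be sketched in a few lines: encode the story so far as fixed sentence embeddings, predict an embedding for the next sentence, and score a large candidate pool by similarity in embedding space. The sketch below is a minimal illustration of that scoring scheme, not the paper's architecture; the mean-pooled context, the linear predictor, the embedding size, and the random "pre-trained" embeddings are all stand-in assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

EMB_DIM = 8          # sentence-embedding size (illustrative only)
N_CONTEXT = 4        # sentences in the story so far
N_CANDIDATES = 100   # pool of fluent candidate next sentences

# Stand-ins for outputs of a frozen, pre-trained sentence encoder.
context = rng.normal(size=(N_CONTEXT, EMB_DIM))
candidates = rng.normal(size=(N_CANDIDATES, EMB_DIM))

# Toy predictor: mean-pool the context embeddings, then apply a
# linear map to predict the next sentence's embedding.
W = rng.normal(size=(EMB_DIM, EMB_DIM)) * 0.1
predicted = context.mean(axis=0) @ W          # shape: (EMB_DIM,)

# Score every candidate by dot product with the prediction and
# normalize with a softmax over the whole pool. Because this compares
# embeddings rather than generating words, scoring many candidates
# per training example stays cheap.
scores = candidates @ predicted               # shape: (N_CANDIDATES,)
probs = np.exp(scores - scores.max())
probs /= probs.sum()

best = int(np.argmax(probs))                  # index of the top-scoring next sentence
print(best)
```

In training, the softmax over the candidate pool would serve as a classification objective (pick the true next sentence among distractors), which is what makes a large candidate set tractable compared with word-level generation.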

Citations (24)
