The Next Chapter: A Study of Large Language Models in Storytelling (2301.09790v3)

Published 24 Jan 2023 in cs.CL

Abstract: To enhance the quality of generated stories, recent story generation models have investigated the use of higher-level attributes such as plots or commonsense knowledge. Prompt-based learning with LLMs, exemplified by GPT-3, has shown remarkable performance across diverse NLP tasks. This paper conducts a comprehensive investigation, using both automatic and human evaluation, to compare the story generation capacity of LLMs with recent models across three datasets that vary in style, register, and story length. The results show that LLMs generate stories of significantly higher quality than other story generation models. Moreover, they perform competitively with human authors, albeit with the preliminary observation that they tend to replicate real stories in situations involving world knowledge, resembling a form of plagiarism.

Authors (3)
  1. Zhuohan Xie (15 papers)
  2. Trevor Cohn (105 papers)
  3. Jey Han Lau (67 papers)
Citations (30)