
Deep-speare: A Joint Neural Model of Poetic Language, Meter and Rhyme (1807.03491v1)

Published 10 Jul 2018 in cs.CL

Abstract: In this paper, we propose a joint architecture that captures language, rhyme and meter for sonnet modelling. We assess the quality of generated poems using crowd and expert judgements. The stress and rhyme models perform very well, as generated poems are largely indistinguishable from human-written poems. Expert evaluation, however, reveals that a vanilla language model captures meter implicitly, and that machine-generated poems still underperform in terms of readability and emotion. Our research shows the importance of expert evaluation for poetry generation, and that future research should look beyond rhyme/meter and focus on poetic language.
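
The sketch below is only a rough illustration of the kind of joint objective the abstract describes: a word-level language model trained alongside auxiliary stress (meter) and rhyme components. The module sizes, the binary stress labels, the margin-based rhyme loss, and the equal loss weights are all assumptions for illustration, not the authors' exact architecture.

```python
# Illustrative sketch only: a joint objective combining a language model with
# auxiliary stress (meter) and rhyme losses, in the spirit of the paper's
# three-part design. Details here are assumptions, not the published model.
import torch
import torch.nn as nn
import torch.nn.functional as F


class JointSonnetModel(nn.Module):
    def __init__(self, vocab_size: int, embed_dim: int = 128, hidden_dim: int = 256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # Language model: LSTM over word embeddings with a softmax output layer.
        self.lm_rnn = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.lm_out = nn.Linear(hidden_dim, vocab_size)
        # Stress (meter) component: predicts a binary stress label per token.
        self.stress_out = nn.Linear(hidden_dim, 2)

    def forward(self, tokens: torch.Tensor):
        h, _ = self.lm_rnn(self.embed(tokens))
        return self.lm_out(h), self.stress_out(h)


def joint_loss(model, tokens, next_tokens, stress_labels,
               rhyme_anchor, rhyme_pos, rhyme_neg, margin: float = 0.5):
    """Sum of LM cross-entropy, stress cross-entropy, and a margin-based
    rhyme loss over line-ending word embeddings (equal weights assumed)."""
    lm_logits, stress_logits = model(tokens)
    lm_loss = F.cross_entropy(lm_logits.flatten(0, 1), next_tokens.flatten())
    stress_loss = F.cross_entropy(stress_logits.flatten(0, 1), stress_labels.flatten())
    # Rhyme: pull rhyming line-ending words together, push non-rhyming ones apart.
    a, p, n = (model.embed(x) for x in (rhyme_anchor, rhyme_pos, rhyme_neg))
    rhyme_loss = F.relu(margin - F.cosine_similarity(a, p) + F.cosine_similarity(a, n)).mean()
    return lm_loss + stress_loss + rhyme_loss
```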

Authors (5)
  1. Jey Han Lau (67 papers)
  2. Trevor Cohn (105 papers)
  3. Timothy Baldwin (125 papers)
  4. Julian Brooke (1 paper)
  5. Adam Hammond (4 papers)
Citations (70)