Deep-speare: A Joint Neural Model of Poetic Language, Meter and Rhyme (1807.03491v1)
Published 10 Jul 2018 in cs.CL
Abstract: In this paper, we propose a joint architecture that captures language, rhyme and meter for sonnet modelling. We assess the quality of generated poems using crowd and expert judgements. The stress and rhyme models perform very well, as generated poems are largely indistinguishable from human-written poems. Expert evaluation, however, reveals that a vanilla language model captures meter implicitly, and that machine-generated poems still underperform in terms of readability and emotion. Our research shows the importance of expert evaluation for poetry generation, and that future research should look beyond rhyme/meter and focus on poetic language.
- Jey Han Lau (67 papers)
- Trevor Cohn (105 papers)
- Timothy Baldwin (125 papers)
- Julian Brooke (1 paper)
- Adam Hammond (4 papers)