
A Hierarchical Attention Based Seq2seq Model for Chinese Lyrics Generation (1906.06481v1)

Published 15 Jun 2019 in cs.CL and cs.LG

Abstract: In this paper, we present a comprehensive study of context-aware generation of Chinese song lyrics. Conventional text generation models produce a sequence or sentence word by word, failing to consider the contextual relationships between sentences. Taking into account the characteristics of lyrics, a hierarchical attention based Seq2Seq (Sequence-to-Sequence) model is proposed for Chinese lyrics generation. By encoding both word-level and sentence-level contextual information, this model improves the topic relevance and consistency of the generated text. A large Chinese lyrics corpus is also leveraged for model training. Results of automatic and human evaluations demonstrate that our model is able to compose complete Chinese lyrics under a single unified topic constraint.
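The abstract's core idea is two-level context encoding: attention over the words within each sentence yields sentence summaries, and attention over those summaries yields a song-level context vector for the decoder. A minimal NumPy sketch of that hierarchy is below; plain dot-product attention stands in for the paper's learned attention, and all dimensions and variable names are illustrative, not the authors' implementation.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attention(states, query):
    # Dot-product attention: weight each state by its similarity to the query,
    # then return the weighted sum as a fixed-size summary vector.
    scores = states @ query
    weights = softmax(scores)
    return weights @ states, weights

rng = np.random.default_rng(0)
d = 8
# Toy "lyrics": three sentences of word embeddings (stand-ins for
# word-level encoder hidden states of varying sentence lengths).
sentences = [rng.normal(size=(5, d)),
             rng.normal(size=(4, d)),
             rng.normal(size=(6, d))]
decoder_state = rng.normal(size=d)

# Word-level attention: summarize each sentence w.r.t. the decoder state.
sent_vecs = np.stack([attention(s, decoder_state)[0] for s in sentences])

# Sentence-level attention: summarize the whole song context, so the
# decoder sees which previous sentences matter for the next line.
context, sent_weights = attention(sent_vecs, decoder_state)

print(context.shape)          # one d-dimensional context vector
print(sent_weights.round(3))  # one weight per previous sentence, summing to 1
```

In the paper's setting the decoder would consume `context` at each step, so every generated line is conditioned on all preceding lines rather than only the immediately previous one.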

Authors (5)
  1. Haoshen Fan (2 papers)
  2. Jie Wang (480 papers)
  3. Bojin Zhuang (10 papers)
  4. Shaojun Wang (29 papers)
  5. Jing Xiao (267 papers)
Citations (20)
