Long Text Generation by Modeling Sentence-Level and Discourse-Level Coherence (2105.08963v1)

Published 19 May 2021 in cs.CL

Abstract: Generating long and coherent text is an important but challenging task, particularly for open-ended language generation tasks such as story generation. Despite their success in modeling intra-sentence coherence, existing generation models (e.g., BART) still struggle to maintain a coherent event sequence throughout the generated text. We conjecture that this is because the decoder has difficulty capturing high-level semantics and discourse structures in the context beyond token-level co-occurrence. In this paper, we propose a long text generation model that represents the prefix sentences at the sentence level and the discourse level during decoding. To this end, we propose two pretraining objectives that learn these representations by predicting inter-sentence semantic similarity and by distinguishing between normal and shuffled sentence orders. Extensive experiments show that our model generates more coherent texts than state-of-the-art baselines.
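
The sketch below illustrates, in PyTorch, the general shape of the two auxiliary objectives the abstract describes: a sentence-level loss that predicts inter-sentence semantic similarity, and a discourse-level loss that classifies whether sentence order has been shuffled. It is a minimal illustration, not the paper's released implementation; the class name, dimensions, pooling assumptions, and the use of an external similarity target are all assumptions made for the example.

```python
# A minimal sketch (not the paper's code) of the two auxiliary pretraining
# objectives, assuming sentence-level hidden states have already been pooled
# from a decoder such as BART. Names and dimensions are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CoherenceObjectives(nn.Module):
    def __init__(self, hidden_size: int = 768):
        super().__init__()
        # Scores whether a document's sentence sequence is in its original order.
        self.order_classifier = nn.Linear(hidden_size, 2)

    def similarity_loss(self, sent_states, target_sim):
        """Sentence-level objective: predict inter-sentence semantic similarity.

        sent_states: (num_sents, hidden) pooled sentence representations.
        target_sim:  (num_sents, num_sents) reference similarities, e.g. from
                     an off-the-shelf sentence encoder (an assumption here).
        """
        states = F.normalize(sent_states, dim=-1)
        pred_sim = states @ states.t()  # predicted cosine-similarity matrix
        return F.mse_loss(pred_sim, target_sim)

    def order_loss(self, doc_state, is_shuffled):
        """Discourse-level objective: distinguish normal vs. shuffled order.

        doc_state:   (batch, hidden) one pooled state per (possibly shuffled) document.
        is_shuffled: (batch,) label 1 if sentence order was shuffled, else 0.
        """
        logits = self.order_classifier(doc_state)
        return F.cross_entropy(logits, is_shuffled)

# Usage with random tensors standing in for pooled decoder states:
obj = CoherenceObjectives()
sents = torch.randn(5, 768)            # 5 sentences in one document
target = torch.rand(5, 5)              # assumed reference similarity matrix
docs = torch.randn(4, 768)             # 4 documents, half with shuffled order
labels = torch.tensor([0, 1, 0, 1])
loss = obj.similarity_loss(sents, target) + obj.order_loss(docs, labels)
loss.backward()
```

In training, losses of this kind would be added to the standard language-modeling objective so the decoder's sentence-level states carry discourse information; how the two terms are weighted against the generation loss is left unspecified here.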

Authors (6)
  1. Jian Guan (65 papers)
  2. Xiaoxi Mao (14 papers)
  3. Changjie Fan (79 papers)
  4. Zitao Liu (76 papers)
  5. Wenbiao Ding (28 papers)
  6. Minlie Huang (225 papers)
Citations (71)