Improving Language Generation with Sentence Coherence Objective (2009.06358v1)

Published 7 Sep 2020 in cs.CL, cs.LG, and stat.ML

Abstract: Conditional story generation and contextual text continuation have become increasingly popular topics in the NLP community. Existing models are often prone to outputting paragraphs of text that gradually diverge from the given prompt. Even though the generated text may have reasonable perplexity and diversity, it can easily be identified by humans as gibberish. The goal of our project is to improve the coherence and consistency across sentences in a language-generation model. We aim to solve this issue by first training a sentence-pair coherence classifier on top of the pretrained GPT-2 model, and then co-training the GPT-2 language model with this new coherence objective using a method analogous to the REINFORCE algorithm. The fine-tuned language model is able to generate lengthy paragraphs conditioned on a given topic without diverging too much. The simplicity of this method allows it to apply to a variety of underlying language-model architectures, since it only modifies the final layer of the pre-trained model.
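The abstract describes co-training with a coherence reward via a REINFORCE-like update: sample a continuation from the model, score it with the coherence classifier, and scale the log-probability gradient of the sampled tokens by that score. The paper's actual classifier and GPT-2 training loop are not reproduced here; the following is a toy NumPy sketch of that update rule, where the vocabulary, the single-logit-vector "policy", and the stand-in `coherence_reward` function are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

VOCAB = 6   # toy vocabulary size (assumption; GPT-2 uses ~50k tokens)
STEPS = 4   # tokens sampled per "sentence"

def softmax(z):
    z = z - z.max()          # shift for numerical stability
    e = np.exp(z)
    return e / e.sum()

def coherence_reward(tokens):
    # Stand-in for the sentence-pair coherence classifier:
    # here, a toy proxy that rewards repeated tokens. The real
    # reward would come from a trained classifier's score.
    return float(len(tokens) - len(set(tokens))) / len(tokens)

def reinforce_step(logits, lr=0.5):
    # Sample a token sequence from the current policy.
    probs = softmax(logits)
    tokens = rng.choice(VOCAB, size=STEPS, p=probs)
    reward = coherence_reward(tokens)
    # REINFORCE: grad of log pi(a) w.r.t. logits is one_hot(a) - probs,
    # scaled by the (sentence-level) reward.
    grad = np.zeros_like(logits)
    for t in tokens:
        onehot = np.zeros(VOCAB)
        onehot[t] = 1.0
        grad += reward * (onehot - probs)
    return logits + lr * grad, reward

logits = np.zeros(VOCAB)     # uniform initial policy
for _ in range(50):
    logits, reward = reinforce_step(logits)
```

In the paper's setting the "policy" is the full GPT-2 decoder rather than a single logit vector, and, per the abstract, only the final layer is updated against the coherence objective, which is what keeps the method portable across architectures.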

Authors (3)
  1. Ruixiao Sun (9 papers)
  2. Jie Yang (516 papers)
  3. Mehrdad Yousefzadeh (3 papers)
Citations (8)
