Language modeling via stochastic processes (2203.11370v2)

Published 21 Mar 2022 in cs.CL and cs.LG

Abstract: Modern LLMs can generate high-quality short texts. However, they often meander or are incoherent when generating longer texts. These issues arise from the next-token-only language modeling objective. Recent work in self-supervised learning suggests that models can learn good latent representations via contrastive learning, which can be effective for discriminative tasks. Our work analyzes the application of contrastive representations for generative tasks, like long text generation. We propose one approach for leveraging contrastive representations, which we call Time Control (TC). TC first learns a contrastive representation of the target text domain, then generates text by decoding from these representations. Compared to domain-specific methods and fine-tuning GPT2 across a variety of text domains, TC performs competitively with methods specific to learning sentence representations on discourse coherence. On long text generation settings, TC preserves the text structure both in terms of ordering (up to $+15\%$ better) and text length consistency (up to $+90\%$ better).
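The abstract describes a two-stage pipeline: first learn a contrastive latent representation of the text domain, then generate by decoding from latents drawn along a stochastic process. Below is a minimal, hypothetical sketch of that idea in PyTorch, assuming a Brownian-bridge-style latent dynamic (as the title suggests); the encoder architecture, loss form, dimensions, and decoding interface are illustrative assumptions, not the paper's exact implementation.

```python
# Hypothetical sketch of Time Control's two stages, per the abstract:
# (1) learn a latent encoder with a contrastive objective whose positives
#     follow Brownian-bridge dynamics between a start and an end latent,
# (2) at generation time, sample a latent "plan" along a bridge and use it
#     to condition a decoder. All sizes and names here are illustrative.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Maps a (precomputed) sentence embedding to a low-dimensional latent z."""
    def __init__(self, in_dim=768, z_dim=8):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 128), nn.GELU(), nn.Linear(128, z_dim))

    def forward(self, x):
        return self.net(x)

def bridge_mean(z0, zT, t, T):
    """Expected latent at time t of a Brownian bridge pinned at z0 (t=0) and zT (t=T)."""
    alpha = (t / T).unsqueeze(-1)
    return (1 - alpha) * z0 + alpha * zT

def contrastive_bridge_loss(z0, zt, zT, t, T, sigma=1.0):
    """InfoNCE-style loss: the observed middle latent zt should score higher
    under its own bridge than the other middles in the batch (negatives)."""
    mu = bridge_mean(z0, zT, t, T)                       # (B, z_dim)
    diff = zt.unsqueeze(0) - mu.unsqueeze(1)             # (B, B, z_dim)
    logits = -(diff ** 2).sum(-1) / (2 * sigma ** 2)     # (B, B)
    targets = torch.arange(z0.size(0))                   # positive = matching triplet
    return nn.functional.cross_entropy(logits, targets)

# --- toy training step on random stand-ins for sentence embeddings ---
enc = Encoder()
opt = torch.optim.Adam(enc.parameters(), lr=1e-3)
B, T = 16, 10.0
x0, xt, xT = (torch.randn(B, 768) for _ in range(3))    # start / middle / end sentences
t = torch.rand(B) * T
opt.zero_grad()
loss = contrastive_bridge_loss(enc(x0), enc(xt), enc(xT), t, T)
loss.backward()
opt.step()

# --- generation-time latent plan: sample a bridge between two endpoints ---
with torch.no_grad():
    z_start, z_end = enc(torch.randn(1, 768)), enc(torch.randn(1, 768))
    steps = torch.linspace(0, T, 12)
    plan = [bridge_mean(z_start, z_end, s.view(1), T)
            + torch.randn_like(z_start) * (s * (T - s) / T).clamp(min=0).sqrt()
            for s in steps]
    # each latent in `plan` would then condition a decoder (e.g. a fine-tuned LM)
```

The intuition is that pinning the latent trajectory at both ends gives the decoder a global plan, which is what the reported gains in ordering and length consistency are measuring.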

Authors (4)
  1. Esin Durmus (38 papers)
  2. Noah Goodman (57 papers)
  3. Tatsunori Hashimoto (80 papers)
  4. Rose E. Wang (2 papers)
Citations (23)
