
Attribute Alignment: Controlling Text Generation from Pre-trained Language Models (2103.11070v2)

Published 20 Mar 2021 in cs.CL

Abstract: Large language models benefit from training with a large amount of unlabeled text, which gives them increasingly fluent and diverse generation capabilities. However, using these models for text generation that takes into account target attributes, such as sentiment polarity or specific topics, remains a challenge. We propose a simple and flexible method for controlling text generation by aligning disentangled attribute representations. In contrast to recent efforts on training a discriminator to perturb the token-level distribution for an attribute, we use the same data to learn an alignment function to guide the pre-trained, non-controlled language model to generate texts with the target attribute without changing the original language model parameters. We evaluate our method on sentiment- and topic-controlled generation, and show large performance gains over previous methods while retaining fluency and diversity.
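
The abstract describes the method only at a high level: a trainable alignment function steers a frozen pre-trained language model toward a target attribute. The sketch below is a rough illustration of that general idea, not the authors' implementation. The module name `AttributeAligner`, the prefix length, and the prefix-style conditioning are all assumptions made for illustration; the paper learns an alignment function over disentangled attribute representations, which this sketch approximates by mapping an attribute id to learned pseudo-token embeddings prepended to the prompt, with the base LM's parameters left untouched.

```python
# Hedged sketch of alignment-style controlled generation (illustrative, not the
# authors' code). Only the small aligner module would be trained; the
# pre-trained LM stays frozen, matching the "without changing the original
# language model parameters" claim in the abstract.
import torch
import torch.nn as nn
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tok = GPT2Tokenizer.from_pretrained("gpt2")
lm = GPT2LMHeadModel.from_pretrained("gpt2")
lm.eval()
for p in lm.parameters():  # freeze the base LM: no parameter updates
    p.requires_grad_(False)

class AttributeAligner(nn.Module):
    """Hypothetical module: maps an attribute id (e.g. 0 = negative sentiment,
    1 = positive) into the LM's embedding space as n_prefix pseudo-tokens."""
    def __init__(self, n_attrs: int, n_prefix: int, d_model: int):
        super().__init__()
        self.n_prefix = n_prefix
        self.d_model = d_model
        self.table = nn.Embedding(n_attrs, n_prefix * d_model)

    def forward(self, attr_ids: torch.LongTensor) -> torch.Tensor:
        # (batch,) -> (batch, n_prefix, d_model)
        return self.table(attr_ids).view(-1, self.n_prefix, self.d_model)

aligner = AttributeAligner(n_attrs=2, n_prefix=5, d_model=lm.config.n_embd)

def controlled_next_token_logits(prompt: str, attr_id: int) -> torch.Tensor:
    ids = tok(prompt, return_tensors="pt").input_ids
    tok_emb = lm.transformer.wte(ids)              # prompt token embeddings
    prefix = aligner(torch.tensor([attr_id]))      # attribute steering prefix
    embeds = torch.cat([prefix, tok_emb], dim=1)   # prepend attribute prefix
    out = lm(inputs_embeds=embeds)
    return out.logits[:, -1, :]                    # next-token distribution

# Training would minimize LM loss on attribute-labeled text through the
# aligner alone, e.g. torch.optim.Adam(aligner.parameters(), lr=1e-4).
logits = controlled_next_token_logits("The movie was", attr_id=1)
print(logits.shape)  # torch.Size([1, 50257])
```

The key design point the abstract emphasizes is that the same attribute-labeled data used elsewhere to train token-level discriminators is instead used to train only this lightweight guidance function, so the base model's fluency and diversity are preserved.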

Authors (3)
  1. Dian Yu (78 papers)
  2. Zhou Yu (206 papers)
  3. Kenji Sagae (6 papers)
Citations (36)