
Critic-Guided Decoding for Controlled Text Generation (2212.10938v1)

Published 21 Dec 2022 in cs.CL

Abstract: Steering language generation towards objectives or away from undesired content has been a long-standing goal in utilizing language models (LMs). Recent work has demonstrated reinforcement learning and weighted decoding as effective approaches to achieve a higher level of language control and quality, each with its own pros and cons. In this work, we propose a novel critic decoding method for controlled language generation (CriticControl) that combines the strengths of reinforcement learning and weighted decoding. Specifically, we adopt the actor-critic framework to train an LM-steering critic from non-differentiable reward models. Similar to weighted decoding, our method freezes the language model and manipulates the output token distribution using the trained critic, improving training efficiency and stability. Evaluation of our method on three controlled generation tasks, namely topic control, sentiment control, and detoxification, shows that our approach generates more coherent and well-controlled texts than previous methods. In addition, CriticControl demonstrates superior generalization ability in zero-shot settings. Human evaluation studies also corroborate our findings.
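The core mechanism described in the abstract, re-weighting a frozen LM's next-token distribution with a separately trained critic, can be sketched in a few lines. This is a minimal illustrative sketch, not the paper's implementation: the function names, the toy vocabulary, and the exact combination rule (adding scaled critic scores to the LM logits before the softmax) are assumptions for illustration.

```python
import numpy as np

def softmax(logits):
    """Numerically stable softmax over a 1-D array of logits."""
    z = logits - logits.max()
    e = np.exp(z)
    return e / e.sum()

def critic_weighted_step(lm_logits, critic_scores, beta=1.0):
    """One decoding step of critic-guided re-weighting (hypothetical sketch).

    The frozen LM's logits are shifted by beta * critic_scores, so the
    resulting distribution is the LM's distribution re-weighted by
    exp(beta * critic_score) per token, then renormalized.
    """
    return softmax(lm_logits + beta * critic_scores)

# Toy 4-token vocabulary: the LM alone prefers token 0,
# but the critic strongly favors token 2 (e.g. an on-topic word).
lm_logits = np.array([2.0, 1.0, 1.0, 0.0])
critic_scores = np.array([0.0, 0.0, 3.0, 0.0])

p_plain = softmax(lm_logits)                                   # argmax is token 0
p_steered = critic_weighted_step(lm_logits, critic_scores)     # argmax shifts to token 2
```

Because the LM stays frozen and only the critic is trained, the steering cost is one extra scoring pass per decoding step; setting `beta=0` recovers the unmodified LM distribution.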

Authors (6)
  1. Minbeom Kim (13 papers)
  2. Hwanhee Lee (36 papers)
  3. Kang Min Yoo (40 papers)
  4. Joonsuk Park (24 papers)
  5. Hwaran Lee (31 papers)
  6. Kyomin Jung (76 papers)
Citations (28)