
Style Control for Schema-Guided Natural Language Generation (2109.12211v1)

Published 24 Sep 2021 in cs.CL

Abstract: Natural Language Generation (NLG) for task-oriented dialogue systems focuses on communicating specific content accurately, fluently, and coherently. While these attributes are crucial for a successful dialogue, it is also desirable to simultaneously accomplish specific stylistic goals, such as response length, point-of-view, descriptiveness, sentiment, formality, and empathy. In this work, we focus on stylistic control and evaluation for schema-guided NLG, with the joint goals of achieving both semantic and stylistic control. We experiment in detail with various controlled generation methods for large pretrained language models: specifically, conditional training, guided fine-tuning, and guided decoding. We discuss their advantages and limitations, and evaluate them with a broad range of automatic and human evaluation metrics. Our results show that while high style accuracy and semantic correctness are easier to achieve for more lexically defined styles with conditional training, stylistic control is also achievable for more semantically complex styles using discriminator-based guided decoding methods. The results also suggest that methods that are more scalable (requiring less hyper-parameter tuning) and that disentangle content generation from stylistic variation are more effective at achieving semantic correctness and style accuracy.
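The conditional training approach mentioned in the abstract typically conditions a sequence-to-sequence model by prepending a style control token to the flattened schema input. A minimal sketch of this data-preparation step is shown below; the token format, slot names, and helper function are illustrative assumptions, not the paper's exact implementation.

```python
# Hedged sketch of conditional-training data preparation for schema-guided NLG.
# A style token (e.g. <style=short>) is prepended to the flattened schema so the
# model learns to condition its response on the requested style. The token
# syntax and slot format here are assumptions for illustration only.

def make_conditional_example(schema_slots, style_label, target_response):
    """Flatten schema slots and prepend a style control token to the input."""
    flat_schema = " ".join(f"{slot}={value}" for slot, value in schema_slots.items())
    return {
        "input": f"<style={style_label}> {flat_schema}",
        "target": target_response,
    }

example = make_conditional_example(
    {"restaurant": "Sakura", "rating": "4.5"},
    "short",
    "Sakura is rated 4.5.",
)
print(example["input"])   # → <style=short> restaurant=Sakura rating=4.5
print(example["target"])  # → Sakura is rated 4.5.
```

At training time, pairs like these are fed to the generator so that swapping the style token at inference steers the output style while the schema content stays fixed; guided decoding instead leaves the generator untouched and re-ranks or re-weights candidate tokens with a style discriminator.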

Authors (8)
  1. Alicia Y. Tsai (7 papers)
  2. Shereen Oraby (26 papers)
  3. Vittorio Perera (4 papers)
  4. Jiun-Yu Kao (7 papers)
  5. Yuheng Du (7 papers)
  6. Anjali Narayan-Chen (10 papers)
  7. Tagyoung Chung (26 papers)
  8. Dilek Hakkani-Tur (94 papers)
Citations (10)