Plug-and-Play Conversational Models (2010.04344v1)

Published 9 Oct 2020 in cs.CL and cs.AI

Abstract: There has been considerable progress towards conversational models that generate coherent and fluent responses; however, this often involves training large language models on large dialogue datasets, such as Reddit. These large conversational models provide little control over the generated responses, and this control is further limited in the absence of annotated conversational datasets for attribute-specific generation that could be used to fine-tune the model. In this paper, we first propose and evaluate plug-and-play methods for controllable response generation, which do not require dialogue-specific datasets and do not rely on fine-tuning a large model. While effective, the decoding procedure induces considerable computational overhead, rendering the conversational model unsuitable for interactive use. To overcome this, we introduce an approach that requires no further computation at decoding time and no fine-tuning of the language model. Through extensive automatic and human evaluation, we demonstrate a high degree of control over the generated conversational responses with respect to multiple desired attributes, while the responses remain fluent.
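The abstract does not spell out the mechanism, but "plug-and-play" controllable generation generally means steering a frozen language model at decoding time with an external attribute model rather than fine-tuning it. The sketch below illustrates that general idea in PyTorch under stated assumptions: at each step, the last hidden state of GPT-2 is nudged along the gradient of a stand-in, untrained attribute head before the next token is chosen. The attribute head, step size, and number of updates are illustrative assumptions, not the paper's exact procedure.

```python
# Minimal sketch of plug-and-play controlled decoding (illustrative only).
# Assumptions: GPT-2 as the frozen conversational model, and a hypothetical
# linear "attribute head" standing in for a trained attribute classifier.
import torch
import torch.nn.functional as F
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

# Stand-in attribute classifier on hidden states (untrained, for illustration).
attr_head = torch.nn.Linear(model.config.n_embd, 1)

def controlled_step(input_ids, step_size=0.02, n_updates=3):
    """One decoding step: steer the last hidden state toward the desired
    attribute, then pick the next token from the re-decoded logits."""
    outputs = model(input_ids, output_hidden_states=True)
    hidden = outputs.hidden_states[-1][:, -1, :].detach().requires_grad_(True)
    for _ in range(n_updates):
        # Gradient of "how much the attribute classifier likes this state".
        attr_logit = attr_head(hidden)
        loss = F.binary_cross_entropy_with_logits(
            attr_logit, torch.ones_like(attr_logit))
        loss.backward()
        with torch.no_grad():
            hidden -= step_size * hidden.grad / (hidden.grad.norm() + 1e-8)
        hidden.grad = None
    # Re-decode logits from the steered hidden state and extend the sequence.
    logits = model.lm_head(hidden)
    next_id = torch.argmax(logits, dim=-1, keepdim=True)
    return torch.cat([input_ids, next_id], dim=-1)

ids = tokenizer("I really enjoyed the movie because", return_tensors="pt").input_ids
for _ in range(20):
    ids = controlled_step(ids)
print(tokenizer.decode(ids[0]))
```

The extra gradient updates at every token are why this style of decoding carries the computational overhead the abstract mentions; the paper's second approach is motivated by moving all attribute-specific work out of the decoding loop.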

Authors (5)
  1. Andrea Madotto (64 papers)
  2. Etsuko Ishii (18 papers)
  3. Zhaojiang Lin (45 papers)
  4. Sumanth Dathathri (14 papers)
  5. Pascale Fung (150 papers)
Citations (51)