
Multi-Task Learning for Speaker-Role Adaptation in Neural Conversation Models (1710.07388v1)

Published 20 Oct 2017 in cs.CL

Abstract: Building a persona-based conversation agent is challenging owing to the lack of large amounts of speaker-specific conversation data for model training. This paper addresses the problem by proposing a multi-task learning approach to training neural conversation models that leverages both conversation data across speakers and other types of data pertaining to the speakers and speaker roles to be modeled. Experiments show that our approach leads to significant improvements over baseline models, generating responses that capture speakers' traits and speaking styles more precisely. The model offers the benefits of being algorithmically simple and easy to implement, and of not relying on large quantities of data representing specific individual speakers.
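The core idea of the abstract — training one model on a general conversation task and a speaker-specific task so that shared parameters absorb the speaker's style — can be illustrated with a toy alternating-update loop. This is a minimal sketch, not the paper's actual architecture: the model sizes, the squared-error loss, and the use of a speaker autoencoding task sharing a decoder with the conversation task are simplifying assumptions made here for illustration.

```python
import numpy as np

# Toy multi-task setup (hypothetical, for illustration only):
# a conversation task and a speaker-specific autoencoding task
# each have their own "encoder" weights but share "decoder" weights,
# so speaker data shapes the parameters used to generate responses.

rng = np.random.default_rng(0)
D = 8  # hidden size (arbitrary toy choice)

params = {
    "enc_conv": rng.normal(0, 0.1, (D, D)),    # conversation-task encoder
    "enc_auto": rng.normal(0, 0.1, (D, D)),    # speaker-task encoder
    "dec_shared": rng.normal(0, 0.1, (D, D)),  # decoder shared by both tasks
}

def step(x, y, enc_key, lr=0.05):
    """One SGD step on squared error, with hand-derived backprop."""
    h = np.tanh(x @ params[enc_key])           # task-specific encoding
    out = h @ params["dec_shared"]             # shared decoding
    err = out - y
    loss = float(np.mean(err ** 2))
    g_dec = h.T @ err / len(x)                 # gradient w.r.t. shared decoder
    g_h = err @ params["dec_shared"].T * (1 - h ** 2)
    g_enc = x.T @ g_h / len(x)                 # gradient w.r.t. task encoder
    params["dec_shared"] -= lr * g_dec
    params[enc_key] -= lr * g_enc
    return loss

# Toy data: conversation pairs (x -> y) and speaker text (x -> x).
conv_x, conv_y = rng.normal(size=(32, D)), rng.normal(size=(32, D))
spk_x = rng.normal(size=(32, D))

losses = {"conv": [], "auto": []}
for epoch in range(200):
    # Alternate tasks; both updates touch the shared decoder weights.
    losses["conv"].append(step(conv_x, conv_y, "enc_conv"))
    losses["auto"].append(step(spk_x, spk_x, "enc_auto"))
```

The point of the sketch is the update schedule: the shared decoder receives gradients from both tasks, so even when speaker-specific conversation pairs are scarce, abundant non-conversational speaker data still shapes the generation parameters.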

Authors (5)
  1. Yi Luan (25 papers)
  2. Chris Brockett (37 papers)
  3. Bill Dolan (45 papers)
  4. Jianfeng Gao (344 papers)
  5. Michel Galley (50 papers)
Citations (81)