MTSS: Learn from Multiple Domain Teachers and Become a Multi-domain Dialogue Expert (2005.10450v1)

Published 21 May 2020 in cs.CL and cs.AI

Abstract: Building a high-quality multi-domain dialogue system is challenging because the dialogue state space is complicated and entangled across domains, which severely limits the quality of the dialogue policy and, in turn, degrades the generated responses. In this paper, we propose a novel method that acquires a satisfying policy while subtly circumventing the knotty problem of dialogue state representation in the multi-domain setting. Inspired by real school teaching scenarios, our method is composed of multiple domain-specific teachers and a universal student. Each teacher focuses on a single domain and learns the corresponding domain knowledge and dialogue policy from a precisely extracted single-domain dialogue state representation. These domain-specific teachers then impart their domain knowledge and policies to a universal student model, collectively making the student a multi-domain dialogue expert. Experimental results show that our method achieves results competitive with state-of-the-art approaches in both multi-domain and single-domain settings.
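The abstract gives only the high-level pattern, so the following is a minimal sketch of the multiple-teacher, single-student distillation idea it describes, assuming a PyTorch setup. All names here (`distill_step`, the `teachers` dict keyed by domain, the temperature `tau`) are hypothetical illustrations; the actual MTSS state extractors, architectures, and losses are defined in the paper itself.

```python
# Hypothetical sketch of multi-teacher -> single-student distillation,
# assuming each training example is tagged with its domain so the
# matching frozen teacher supplies the soft policy targets.
import torch
import torch.nn.functional as F

def distill_step(student, teachers, states, domains, optimizer, tau=2.0):
    """One update: the student mimics the per-domain teacher's policy.

    student  : nn.Module mapping state -> action logits (all domains)
    teachers : dict[domain_name -> frozen nn.Module], one per domain
    states   : (B, state_dim) batch of dialogue-state features
    domains  : list of B domain names, one per example
    """
    student_logits = student(states)                       # (B, num_actions)
    with torch.no_grad():
        # Route each example to its own domain's teacher.
        teacher_logits = torch.stack(
            [teachers[d](s.unsqueeze(0)).squeeze(0)
             for s, d in zip(states, domains)])            # (B, num_actions)

    # Temperature-softened KL divergence: the standard distillation loss.
    loss = F.kl_div(
        F.log_softmax(student_logits / tau, dim=-1),
        F.softmax(teacher_logits / tau, dim=-1),
        reduction="batchmean") * tau ** 2

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Routing each example to its domain's teacher is what lets every teacher stay narrow (one precisely extracted single-domain state representation) while the student alone must cover the union of all domains.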

Authors (6)
  1. Shuke Peng (2 papers)
  2. Feng Ji (74 papers)
  3. Zehao Lin (38 papers)
  4. Shaobo Cui (15 papers)
  5. Haiqing Chen (29 papers)
  6. Yin Zhang (98 papers)
Citations (11)