
UniDU: Towards A Unified Generative Dialogue Understanding Framework (2204.04637v2)

Published 10 Apr 2022 in cs.CL

Abstract: With the development of pre-trained language models, remarkable success has been witnessed in dialogue understanding (DU). However, current DU approaches usually employ independent models for each distinct DU task, without considering shared knowledge across different DU tasks. In this paper, we propose a unified generative dialogue understanding framework, named UniDU, to achieve effective information exchange across diverse DU tasks. Specifically, we reformulate all DU tasks into a unified prompt-based generative paradigm. More importantly, a novel model-agnostic multi-task training strategy (MATS) is introduced to dynamically adapt the weights of diverse tasks for the best knowledge sharing during training, based on the nature and amount of available data for each task. Experiments on ten DU datasets covering five fundamental DU tasks show that the proposed UniDU framework significantly outperforms well-designed task-specific methods on all tasks. MATS also reveals the knowledge-sharing structure among these tasks. Finally, UniDU achieves promising performance on unseen dialogue domains, demonstrating strong potential for generalization.
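The abstract's core idea, recasting heterogeneous DU tasks as a single text-to-text problem, can be sketched as follows. Note that the template wording, the task keys, and the `to_text_pair` helper below are illustrative assumptions for exposition; the paper's actual prompt formulations may differ.

```python
# Minimal sketch of UniDU-style task unification. Each dialogue understanding
# (DU) task is serialized into a (source, target) text pair so that a single
# seq2seq model can be trained on the mixed pool of all tasks.

# Hypothetical prompt templates for the five DU tasks covered in the paper.
TEMPLATES = {
    "intent_detection":
        "task: intent detection dialogue: {dialogue} question: what is the user's intent?",
    "slot_filling":
        "task: slot filling dialogue: {dialogue} question: what is the value of {slot}?",
    "dialogue_state_tracking":
        "task: state tracking dialogue: {dialogue} question: what is the dialogue state?",
    "dialogue_summary":
        "task: summary dialogue: {dialogue} question: what is the summary of the dialogue?",
    "dialogue_completion":
        "task: completion dialogue: {dialogue} question: restore the incomplete utterance.",
}

def to_text_pair(task: str, dialogue: str, target: str, **slots) -> tuple[str, str]:
    """Render one DU example as a (source, target) pair for a text-to-text model."""
    source = TEMPLATES[task].format(dialogue=dialogue, **slots)
    return source, target

if __name__ == "__main__":
    src, tgt = to_text_pair(
        "intent_detection",
        dialogue="user: I need a cheap hotel in the north.",
        target="find_hotel",
    )
    print(src)  # unified prompt-style input for the generative model
    print(tgt)  # label rendered as plain text
```

During training, MATS would then mix (source, target) pairs from these task streams with dynamically adapted task weights; the sketch covers only the serialization step that makes such mixing possible.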

Authors (8)
  1. Zhi Chen (235 papers)
  2. Lu Chen (244 papers)
  3. Bei Chen (56 papers)
  4. Libo Qin (77 papers)
  5. Yuncong Liu (7 papers)
  6. Su Zhu (29 papers)
  7. Jian-Guang Lou (69 papers)
  8. Kai Yu (201 papers)
Citations (13)