
Multijugate Dual Learning for Low-Resource Task-Oriented Dialogue System (2305.16106v1)

Published 25 May 2023 in cs.CL

Abstract: Dialogue data in real scenarios tend to be sparsely available, leaving data-starved end-to-end dialogue systems inadequately trained. We discover that data utilization efficiency in low-resource scenarios can be enhanced by mining the alignment information between uncertain utterances and deterministic dialogue states. Therefore, we innovatively implement dual learning in task-oriented dialogues to exploit the correlation of heterogeneous data. In addition, the one-to-one duality is converted into a multijugate duality to reduce the influence of spurious correlations in dual training and improve generalization. Without introducing additional parameters, our method can be implemented in arbitrary networks. Extensive empirical analyses demonstrate that our proposed method improves the effectiveness of end-to-end task-oriented dialogue systems on multiple benchmarks and obtains state-of-the-art results in low-resource scenarios.
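The abstract's core idea can be sketched as follows: a forward model maps an utterance to a dialogue state, a dual (backward) model maps the state back to an utterance, and multijugate duality pairs one deterministic state with several paraphrased utterances. This is a minimal illustrative sketch of how such dual objectives might be combined; all function names and weights are assumptions for exposition, not the paper's actual implementation.

```python
# Hedged sketch of dual learning with multijugate duality.
# `forward_nll` / `backward_nll` stand in for the negative log-likelihoods
# of the utterance->state and state->utterance directions; the 0.5 weight
# is an illustrative choice, not taken from the paper.

def dual_learning_loss(forward_nll: float, backward_nll: float,
                       weight: float = 0.5) -> float:
    """Combine forward (utterance -> state) and backward
    (state -> utterance) losses into one dual objective."""
    return weight * forward_nll + (1.0 - weight) * backward_nll

def multijugate_loss(forward_nlls: list[float],
                     backward_nlls: list[float]) -> float:
    """One deterministic state paired with several paraphrased
    utterances: average the dual loss over all paraphrase pairings,
    which is the one-to-many ("multijugate") extension."""
    pairs = zip(forward_nlls, backward_nlls)
    losses = [dual_learning_loss(f, b) for f, b in pairs]
    return sum(losses) / len(losses)

# Toy numbers for three paraphrases of the same dialogue state.
loss = multijugate_loss([2.0, 1.5, 2.5], [1.0, 1.2, 0.8])
print(loss)  # -> 1.5
```

In an actual system the two directions would be parameterized by the same network (the abstract notes no additional parameters are introduced), with the dual loss backpropagated through both generation directions.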

Authors (5)
  1. Shimin Li (22 papers)
  2. Xiaotian Zhang (35 papers)
  3. Yanjun Zheng (3 papers)
  4. Linyang Li (57 papers)
  5. Xipeng Qiu (257 papers)
Citations (2)
