Prompt Conditioned VAE: Enhancing Generative Replay for Lifelong Learning in Task-Oriented Dialogue (2210.07783v2)

Published 14 Oct 2022 in cs.CL, cs.AI, and cs.LG

Abstract: Lifelong learning (LL) is vital for advanced task-oriented dialogue (ToD) systems. To address the catastrophic forgetting issue of LL, generative replay methods are widely employed to consolidate past knowledge with generated pseudo samples. However, most existing generative replay methods use only a single task-specific token to control their models. This scheme is usually not strong enough to constrain the generative model due to insufficient information involved. In this paper, we propose a novel method, prompt conditioned VAE for lifelong learning (PCLL), to enhance generative replay by incorporating tasks' statistics. PCLL captures task-specific distributions with a conditional variational autoencoder, conditioned on natural language prompts to guide the pseudo-sample generation. Moreover, it leverages a distillation process to further consolidate past knowledge by alleviating the noise in pseudo samples. Experiments on natural language understanding tasks of ToD systems demonstrate that PCLL significantly outperforms competitive baselines in building LL models.
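The core mechanism the abstract describes — a variational autoencoder whose encoder and decoder are conditioned on a task prompt, so that sampling from the prior yields task-specific pseudo samples for replay — can be sketched in a few lines. This is a hypothetical toy illustration, not the paper's implementation: the class name, the linear encoder/decoder, and the fixed prompt-embedding vector are all simplifying assumptions (PCLL uses pretrained language models and natural language prompts).

```python
import numpy as np

rng = np.random.default_rng(0)

def reparameterize(mu, logvar):
    # Reparameterization trick: z = mu + sigma * eps, eps ~ N(0, I)
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * logvar) * eps

class PromptConditionedVAE:
    """Toy sketch of a prompt-conditioned VAE (hypothetical, linear maps only).

    The task prompt is represented by an embedding vector concatenated to
    both the encoder input and the latent code, so the learned latent
    distribution and the decoder are conditioned on the task.
    """

    def __init__(self, x_dim, prompt_dim, z_dim):
        self.z_dim = z_dim
        self.W_mu = rng.standard_normal((x_dim + prompt_dim, z_dim)) * 0.1
        self.W_lv = rng.standard_normal((x_dim + prompt_dim, z_dim)) * 0.1
        self.W_dec = rng.standard_normal((z_dim + prompt_dim, x_dim)) * 0.1

    def encode(self, x, prompt_emb):
        # Posterior parameters q(z | x, prompt)
        h = np.concatenate([x, prompt_emb])
        return h @ self.W_mu, h @ self.W_lv

    def decode(self, z, prompt_emb):
        # Reconstruction p(x | z, prompt)
        return np.concatenate([z, prompt_emb]) @ self.W_dec

    def generate_pseudo_sample(self, prompt_emb):
        # Generative replay: sample z from the prior N(0, I) and decode,
        # with the prompt steering generation toward the past task.
        z = rng.standard_normal(self.z_dim)
        return self.decode(z, prompt_emb)
```

During lifelong learning, pseudo samples for each previous task would be generated via `generate_pseudo_sample` with that task's prompt and mixed into the training data for the new task; the distillation step mentioned in the abstract would then down-weight noisy pseudo samples, which this sketch omits.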

Authors (9)
  1. Yingxiu Zhao (13 papers)
  2. Yinhe Zheng (30 papers)
  3. Zhiliang Tian (32 papers)
  4. Chang Gao (54 papers)
  5. Bowen Yu (89 papers)
  6. Haiyang Yu (109 papers)
  7. Yongbin Li (128 papers)
  8. Jian Sun (415 papers)
  9. Nevin L. Zhang (44 papers)
Citations (10)