Discriminative Deep Dyna-Q: Robust Planning for Dialogue Policy Learning (1808.09442v2)

Published 28 Aug 2018 in cs.CL, cs.AI, and cs.LG

Abstract: This paper presents a Discriminative Deep Dyna-Q (D3Q) approach to improving the effectiveness and robustness of Deep Dyna-Q (DDQ), a recently proposed framework that extends the Dyna-Q algorithm to integrate planning for task-completion dialogue policy learning. To obviate DDQ's high dependency on the quality of simulated experiences, we incorporate an RNN-based discriminator in D3Q to differentiate simulated experience from real user experience in order to control the quality of training data. Experiments show that D3Q significantly outperforms DDQ by controlling the quality of simulated experience used for planning. The effectiveness and robustness of D3Q are further demonstrated in a domain extension setting, where the agent's capability to adapt to a changing environment is tested.
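The quality-control step the abstract describes — a discriminator that screens simulated experience before it is used for planning — can be sketched as follows. This is a minimal illustration, not the paper's implementation: `discriminator_score` here is a hypothetical stand-in for the trained RNN-based discriminator, replaced by a toy length-based heuristic so the example is self-contained and runnable.

```python
def discriminator_score(dialogue):
    # Hypothetical stand-in for D3Q's RNN-based discriminator, which
    # estimates the probability that `dialogue` came from a real user.
    # A toy heuristic (longer dialogues look "more real") is used here
    # purely so this sketch runs without a trained model.
    return min(1.0, len(dialogue) / 10.0)

def filter_simulated_experience(simulated_batch, threshold=0.5):
    """Keep only simulated dialogues the discriminator judges realistic.

    In D3Q, only experience that passes this filter is added to the
    replay buffer used for planning updates; low-quality simulated
    experience is discarded instead of corrupting the policy.
    """
    return [d for d in simulated_batch
            if discriminator_score(d) >= threshold]

# Toy batch: each "dialogue" is a list of (user_act, agent_act) turns.
batch = [
    [("greet", "request")] * 2,   # short  -> score 0.2, discarded
    [("greet", "request")] * 6,   # longer -> score 0.6, kept
]
kept = filter_simulated_experience(batch)
```

In the actual method, the discriminator is trained jointly with the dialogue agent to separate world-model rollouts from real user dialogues, so the filtering threshold operates on a learned probability rather than a heuristic as above.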

Authors (5)
  1. Shang-Yu Su (20 papers)
  2. Xiujun Li (37 papers)
  3. Jianfeng Gao (344 papers)
  4. Jingjing Liu (139 papers)
  5. Yun-Nung Chen (104 papers)
Citations (66)