Deep Dyna-Q: Integrating Planning for Task-Completion Dialogue Policy Learning (1801.06176v3)

Published 18 Jan 2018 in cs.CL, cs.AI, cs.LG, and cs.NE

Abstract: Training a task-completion dialogue agent via reinforcement learning (RL) is costly because it requires many interactions with real users. One common alternative is to use a user simulator. However, a user simulator usually lacks the language complexity of human interlocutors and the biases in its design may tend to degrade the agent. To address these issues, we present Deep Dyna-Q, which to our knowledge is the first deep RL framework that integrates planning for task-completion dialogue policy learning. We incorporate into the dialogue agent a model of the environment, referred to as the world model, to mimic real user response and generate simulated experience. During dialogue policy learning, the world model is constantly updated with real user experience to approach real user behavior, and in turn, the dialogue agent is optimized using both real experience and simulated experience. The effectiveness of our approach is demonstrated on a movie-ticket booking task in both simulated and human-in-the-loop settings.
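The abstract describes a loop that interleaves three processes: direct RL on real user experience, training of the world model on that same experience, and planning, i.e. additional policy updates on simulated experience drawn from the world model. Below is a minimal, runnable sketch of that loop. The toy environment, tabular Q-learning agent, and memorizing world model are illustrative assumptions standing in for the paper's DQN agent and neural world model, not the paper's implementation:

```python
import random
from collections import defaultdict

# Minimal sketch of the Deep Dyna-Q loop from the abstract:
# (1) direct RL on real experience, (2) world-model learning,
# (3) planning with simulated experience. ToyEnv, QAgent, and
# WorldModel are illustrative stand-ins, not the paper's components.

ACTIONS = [0, 1]

class ToyEnv:
    """Stand-in for the real user: reach state 4 within 8 turns."""
    def reset(self):
        self.s, self.t = 0, 0
        return self.s

    def step(self, a):
        self.t += 1
        self.s = min(self.s + a, 4)
        done = self.s == 4 or self.t >= 8
        return self.s, (1.0 if self.s == 4 else -0.1), done

class QAgent:
    def __init__(self, eps=0.2, alpha=0.5, gamma=0.9):
        self.q = defaultdict(float)
        self.eps, self.alpha, self.gamma = eps, alpha, gamma

    def act(self, s):
        if random.random() < self.eps:
            return random.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: self.q[(s, a)])

    def update(self, s, a, r, s2, done):
        # One-step Q-learning backup; applied to both real and simulated experience.
        best = 0.0 if done else max(self.q[(s2, b)] for b in ACTIONS)
        self.q[(s, a)] += self.alpha * (r + self.gamma * best - self.q[(s, a)])

class WorldModel:
    """Memorizes real transitions (the paper trains a neural model instead)."""
    def __init__(self):
        self.table = {}

    def learn(self, s, a, s2, r, done):
        self.table[(s, a)] = (s2, r, done)

    def predict(self, s, a):
        # Unseen state-action pairs simply end the simulated dialogue.
        return self.table.get((s, a), (s, -0.1, True))

env, agent, model = ToyEnv(), QAgent(), WorldModel()
for episode in range(200):
    # (1) Direct RL: interact with the real environment and update the agent;
    # (2) world-model learning: refine the model with the same real experience.
    s, done = env.reset(), False
    while not done:
        a = agent.act(s)
        s2, r, done = env.step(a)
        agent.update(s, a, r, s2, done)
        model.learn(s, a, s2, r, done)
        s = s2
    # (3) Planning: K simulated rollouts from the world model also train the agent.
    for _ in range(5):  # K = 5 planning steps per real episode
        s, done = 0, False  # simulated dialogues start from the initial state
        while not done:
            a = agent.act(s)
            s2, r, done = model.predict(s, a)
            agent.update(s, a, r, s2, done)
            s = s2

print("Greedy action from the start state:", max(ACTIONS, key=lambda a: agent.q[(0, a)]))
```

In the paper itself, the agent is a DQN and the world model is a neural network trained on real experience to predict the user's response, the reward, and whether the dialogue terminates; the number of planning steps K controls how heavily the agent relies on simulated versus real experience.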

Authors (6)
  1. Baolin Peng (72 papers)
  2. Xiujun Li (37 papers)
  3. Jianfeng Gao (344 papers)
  4. Jingjing Liu (139 papers)
  5. Kam-Fai Wong (92 papers)
  6. Shang-Yu Su (20 papers)
Citations (158)