Diluted Near-Optimal Expert Demonstrations for Guiding Dialogue Stochastic Policy Optimisation (2012.04687v1)

Published 25 Nov 2020 in cs.CL, cs.AI, and cs.LG

Abstract: A learning dialogue agent can infer its behaviour from interactions with users. These interactions can be drawn from either human-to-human or human-machine conversations. However, human interactions are scarce and costly, making learning from few interactions essential. One solution to speed up the learning process is to guide the agent's exploration with the help of an expert. We present in this paper several imitation learning strategies for dialogue policy where the guiding expert is a near-optimal handcrafted policy. We incorporate these strategies with state-of-the-art reinforcement learning methods based on Q-learning and actor-critic. We notably propose a randomised exploration policy which allows for a seamless hybridisation of the learned policy and the expert. Our experiments show that our hybridisation strategy outperforms several baselines, and that it can accelerate the learning when facing real humans.
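The abstract does not spell out the hybridisation mechanism, but a randomised mixture of an expert and a learned policy can be sketched minimally as follows. All names here (`hybrid_action`, `expert_prob`, the policy callables) are illustrative assumptions, not the paper's actual interface:

```python
import random

def hybrid_action(state, learned_policy, expert_policy, expert_prob):
    """Randomised hybrid policy (illustrative sketch).

    With probability `expert_prob`, defer to the near-optimal expert;
    otherwise act according to the learned policy. Annealing
    `expert_prob` toward zero over training would gradually hand
    control back to the learned policy.
    """
    if random.random() < expert_prob:
        return expert_policy(state)
    return learned_policy(state)


# Toy usage with stub policies (placeholders, not from the paper):
learned = lambda s: "learned_action"
expert = lambda s: "expert_action"
action = hybrid_action("some_state", learned, expert, expert_prob=0.5)
```

In such a scheme, the exploration policy used to collect transitions mixes the two behaviours seamlessly, so standard off-policy methods like Q-learning can still consume the resulting experience.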

Authors (4)
  1. Fabrice Lefèvre (8 papers)
  2. Thibault Cordier (3 papers)
  3. Tanguy Urvoy (14 papers)
  4. Lina M. Rojas-Barahona (20 papers)
Citations (4)