Deep Active Learning for Dialogue Generation (1612.03929v5)
Published 12 Dec 2016 in cs.CL, cs.AI, and cs.NE
Abstract: We propose an online, end-to-end, neural generative conversational model for open-domain dialogue. It is trained using a unique combination of offline two-phase supervised learning and online human-in-the-loop active learning. Whereas most existing work relies on offline supervision or hand-crafted reward functions for online reinforcement, we devise a novel interactive learning mechanism based on Hamming-diverse beam search for response generation and one-character user feedback at each step. Experiments show that our model inherently promotes the generation of semantically relevant and interesting responses, and can be used to train agents with customized personas, moods and conversational styles.
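To illustrate the core idea behind Hamming-diverse beam search, here is a minimal sketch of the standard Hamming diversity penalty: each candidate response is penalized in proportion to how many of its tokens already appear in previously selected hypotheses, pushing later beam groups toward lexically different responses. The function names, the `strength` parameter, and the toy candidates are illustrative assumptions, not the paper's implementation.

```python
from collections import Counter


def hamming_diversity_penalty(candidate_tokens, previous_groups, strength=0.5):
    """Penalty proportional to how often each candidate token already
    appears in hypotheses selected by earlier beam groups (an assumed
    form of the Hamming diversity term)."""
    counts = Counter(tok for seq in previous_groups for tok in seq)
    return strength * sum(counts[tok] for tok in candidate_tokens)


def rescore(candidates, previous_groups, strength=0.5):
    """candidates: list of (tokens, log_prob) pairs.
    Returns candidates re-scored with the diversity penalty subtracted,
    sorted best-first, so repetitive responses drop in rank."""
    rescored = [
        (toks, logp - hamming_diversity_penalty(toks, previous_groups, strength))
        for toks, logp in candidates
    ]
    return sorted(rescored, key=lambda pair: pair[1], reverse=True)


# Toy example: the second response wins once the first is penalized
# for repeating an already-selected hypothesis verbatim.
candidates = [(["i", "am", "fine"], -1.0), (["that", "sounds", "great"], -1.2)]
previous_groups = [["i", "am", "fine"]]
best_tokens, best_score = rescore(candidates, previous_groups)[0]
```

In the toy run above, the repeated response incurs a penalty of 0.5 per overlapping token and falls behind the novel one, which is the behavior the paper exploits to surface more interesting responses.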
- Nabiha Asghar
- Pascal Poupart
- Xin Jiang
- Hang Li