
Deep Reinforcement Learning for Chatbots Using Clustered Actions and Human-Likeness Rewards (1908.10331v1)

Published 27 Aug 2019 in cs.AI, cs.CL, cs.LG, and cs.NE

Abstract: Training chatbots using the reinforcement learning paradigm is challenging due to high-dimensional states, infinite action spaces, and the difficulty of specifying the reward function. We address these problems using clustered actions instead of infinite actions, and a simple but promising reward function based on human-likeness scores derived from human-human dialogue data. We train Deep Reinforcement Learning (DRL) agents using chitchat data in raw text, without any manual annotations. Experimental results using different splits of training data report the following. First, our agents learn reasonable policies in the environments they are familiarised with, but their performance drops substantially when they are exposed to a test set of unseen dialogues. Second, the choice of sentence embedding size between 100 and 300 dimensions makes no significant difference on test data. Third, our proposed human-likeness rewards are reasonable for training chatbots, provided the agents use lengthy dialogue histories of >=10 sentences.
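The clustered-actions idea in the abstract can be sketched concretely: candidate response sentences are embedded as vectors, and the continuous space is partitioned into a finite set of clusters, each of which serves as one discrete action for the DRL agent. The sketch below is an assumption for illustration only, using a plain k-means over random stand-in embeddings; the paper's actual clustering method, embedding model, and cluster count may differ.

```python
import numpy as np

def cluster_actions(embeddings, k, iters=20, seed=0):
    """Partition sentence embeddings into k clusters (plain k-means).

    Each cluster index then acts as one discrete action, replacing
    the infinite space of all possible response sentences.
    """
    rng = np.random.default_rng(seed)
    # Initialise centroids from k distinct embeddings.
    centroids = embeddings[rng.choice(len(embeddings), size=k, replace=False)]
    for _ in range(iters):
        # Assign each embedding to its nearest centroid (Euclidean distance).
        dists = np.linalg.norm(
            embeddings[:, None, :] - centroids[None, :, :], axis=2
        )
        labels = dists.argmin(axis=1)
        # Recompute each centroid as the mean of its members;
        # keep the old centroid if a cluster is empty.
        for j in range(k):
            members = embeddings[labels == j]
            if len(members):
                centroids[j] = members.mean(axis=0)
    return centroids, labels

# Toy stand-in for sentence embeddings (the paper compares 100- and
# 300-dimensional embeddings; 100 is used here arbitrarily).
rng = np.random.default_rng(1)
emb = rng.normal(size=(200, 100))
centroids, labels = cluster_actions(emb, k=8)
print(centroids.shape, labels.shape)
```

At decision time the agent would pick a cluster (action) and then emit a sentence associated with it, so the policy's output layer needs only k units rather than scoring an unbounded sentence set.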

Authors (6)
  1. Donghyeon Lee (10 papers)
  2. Seonghan Ryu (4 papers)
  3. Sungja Choi (2 papers)
  4. Inchul Hwang (12 papers)
  5. Jihie Kim (23 papers)
  6. Heriberto Cuayáhuitl (12 papers)
Citations (6)