
Towards End-to-End Learning for Efficient Dialogue Agent by Modeling Looking-ahead Ability (1908.05408v1)

Published 15 Aug 2019 in cs.CL

Abstract: Learning an efficient dialogue-agent manager from data with little manual intervention is important, especially for goal-oriented dialogues. However, existing methods either require substantial manual effort (e.g. reinforcement learning methods) or cannot guarantee dialogue efficiency (e.g. sequence-to-sequence methods). In this paper, we address this problem by proposing a novel end-to-end learning model that trains a dialogue agent to look ahead several future turns and generate an optimal response, making the dialogue efficient. Our method is data-driven and requires little manual intervention during system design. We evaluate our method on two datasets from different scenarios, and the experimental results demonstrate the efficiency of our model.
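The core idea in the abstract, choosing the current response by simulating several future turns, can be sketched as a simple n-step lookahead search. This is an illustrative toy, not the authors' model: the candidate responses, the per-turn scoring function, and the deterministic transition are all hypothetical stand-ins for the learned components the paper describes.

```python
def step_score(state, response):
    """Toy per-turn score: a stand-in for a learned reward model.

    A small constant cost per turn penalizes long dialogues, so the
    agent is pushed toward responses that finish the task quickly."""
    return response["progress"] - 0.1

def transition(state, response):
    """Toy deterministic state update (an assumption for this sketch)."""
    return state + response["progress"]

def lookahead_value(state, response, candidates, depth):
    """Score of taking `response` now, plus the best continuation
    over the remaining `depth - 1` simulated future turns."""
    value = step_score(state, response)
    if depth == 1:
        return value
    next_state = transition(state, response)
    return value + max(
        lookahead_value(next_state, r, candidates, depth - 1)
        for r in candidates
    )

def choose_response(state, candidates, depth=3):
    """Pick the candidate whose simulated depth-turn rollout scores highest."""
    return max(
        candidates,
        key=lambda r: lookahead_value(state, r, candidates, depth),
    )

candidates = [
    {"text": "ask a clarifying question", "progress": 0.2},
    {"text": "propose the final answer", "progress": 0.6},
]
best = choose_response(state=0.0, candidates=candidates, depth=3)
```

In the paper, the scoring and transition components would be learned end-to-end from dialogue data rather than hand-written; the sketch only shows how a lookahead over future turns turns per-turn scores into an efficiency-aware choice of the current response.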

Authors (5)
  1. Zhuoxuan Jiang (12 papers)
  2. Xian-Ling Mao (76 papers)
  3. Ziming Huang (8 papers)
  4. Jie Ma (205 papers)
  5. Shaochun Li (7 papers)
Citations (5)