Generative Conversational Networks (2106.08484v2)

Published 15 Jun 2021 in cs.CL and cs.HC

Abstract: Inspired by recent work in meta-learning and generative teaching networks, we propose a framework called Generative Conversational Networks, in which conversational agents learn to generate their own labelled training data (given some seed data) and then train themselves from that data to perform a given task. We use reinforcement learning to optimize the data generation process where the reward signal is the agent's performance on the task. The task can be any language-related task, from intent detection to full task-oriented conversations. In this work, we show that our approach is able to generalise from seed data and performs well in limited data and limited computation settings, with significant gains for intent detection and slot tagging across multiple datasets: ATIS, TOD, SNIPS, and Restaurants8k. We show an average improvement of 35% in intent detection and 21% in slot tagging over a baseline model trained from the seed data. We also conduct an analysis of the novelty of the generated data and provide generated examples for intent detection, slot tagging, and non-goal oriented conversations.
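The loop the abstract describes — generate labelled data from seed examples, train a learner on it, and use the learner's task performance as a reward to improve the generator — can be sketched in miniature. The toy below is an illustrative assumption, not the paper's implementation: the "generator" is a weighted choice over hand-written utterance templates, the "learner" is a trivial keyword classifier, and the reward reweights templates REINFORCE-style. All names (`TEMPLATES`, `train_and_eval`, etc.) are hypothetical.

```python
import random

# Toy sketch of the generate -> train -> reward loop (illustrative only;
# the paper uses neural generators and real NLU models, not this).

SEED = [("book a flight to boston", "book_flight"),
        ("what's the weather in paris", "get_weather"),
        ("play some jazz music", "play_music")]

TEMPLATES = {  # candidate generation rules the generator chooses among
    "book_flight": ["book a flight to {}", "i need a ticket to {}"],
    "get_weather": ["what's the weather in {}", "forecast for {}"],
    "play_music":  ["play some {} music", "put on {}"],
}
CITIES, GENRES = ["boston", "paris", "tokyo"], ["jazz", "rock"]

def generate(weights, n=30):
    """Sample n labelled utterances according to per-intent template weights."""
    data, choices = [], []
    for _ in range(n):
        intent = random.choice(list(TEMPLATES))
        tmpls = TEMPLATES[intent]
        idx = random.choices(range(len(tmpls)), weights=weights[intent])[0]
        fill = random.choice(GENRES if intent == "play_music" else CITIES)
        data.append((tmpls[idx].format(fill), intent))
        choices.append((intent, idx))
    return data, choices

def train_and_eval(data, heldout):
    """Train a trivial keyword learner on data; return accuracy on heldout."""
    vocab = {}  # word -> {intent: count}
    for text, intent in data:
        for w in text.split():
            vocab.setdefault(w, {}).setdefault(intent, 0)
            vocab[w][intent] += 1
    def predict(text):
        scores = {}
        for w in text.split():
            for intent, c in vocab.get(w, {}).items():
                scores[intent] = scores.get(intent, 0) + c
        return max(scores, key=scores.get) if scores else None
    return sum(predict(t) == y for t, y in heldout) / len(heldout)

# RL loop: reward is the learner's accuracy on the seed (held-out) data;
# templates used in high-reward rounds get upweighted (REINFORCE-style).
weights = {k: [1.0] * len(v) for k, v in TEMPLATES.items()}
for step in range(20):
    data, choices = generate(weights)
    reward = train_and_eval(data, SEED)
    for intent, idx in choices:
        weights[intent][idx] += 0.1 * reward
```

The key design point mirrored here is that the generator is never supervised directly: it only ever sees a scalar reward derived from how well a model trained on its output performs, which is what lets the framework apply to any task with a measurable score.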

Authors (6)
  1. Alexandros Papangelis (23 papers)
  2. Karthik Gopalakrishnan (34 papers)
  3. Aishwarya Padmakumar (17 papers)
  4. Seokhwan Kim (29 papers)
  5. Gokhan Tur (47 papers)
  6. Dilek Hakkani-Tur (94 papers)
Citations (15)