
Collaborative Multi-Agent Dialogue Model Training Via Reinforcement Learning (1907.05507v2)

Published 11 Jul 2019 in cs.HC and cs.CL

Abstract: We present the first complete attempt at concurrently training conversational agents that communicate only via self-generated language. Using DSTC2 as seed data, we trained natural language understanding (NLU) and generation (NLG) networks for each agent and let the agents interact online. We model the interaction as a stochastic collaborative game where each agent (player) has a role ("assistant", "tourist", "eater", etc.) and their own objectives, and can only interact via natural language they generate. Each agent, therefore, needs to learn to operate optimally in an environment with multiple sources of uncertainty (its own NLU and NLG, the other agent's NLU, Policy, and NLG). In our evaluation, we show that the stochastic-game agents outperform deep learning based supervised baselines.
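Below is a minimal sketch of the two-agent interaction loop the abstract describes: each agent has its own NLU, policy, and NLG, and the agents exchange only the natural language they generate, receiving a reward at the end of the dialogue. All class and function names (Agent, nlu, policy, nlg, run_dialogue) are illustrative placeholders, not the authors' actual implementation or code.

import random

class Agent:
    """One player (e.g. 'tourist' or 'assistant') with its own NLU, policy, and NLG."""

    def __init__(self, role):
        self.role = role

    def nlu(self, utterance):
        # Map the partner's natural-language utterance to a dialogue-act estimate.
        # In the paper this component is learned and noisy, so it is one of the
        # sources of uncertainty each agent must cope with.
        return {"act": "inform", "text": utterance}

    def policy(self, state):
        # Choose the next dialogue act given the current belief state;
        # this is the component trained with reinforcement learning.
        return random.choice(["request", "inform", "confirm", "bye"])

    def nlg(self, act):
        # Realise the chosen act as natural language (also learned and noisy).
        return f"<{self.role}:{act}>"

def run_dialogue(tourist, assistant, max_turns=10):
    """Let two agents interact only via the language they generate."""
    utterance = "hello"
    for _ in range(max_turns):
        for speaker in (assistant, tourist):
            state = speaker.nlu(utterance)
            act = speaker.policy(state)
            utterance = speaker.nlg(act)
            if act == "bye":
                return 1.0  # placeholder terminal reward for a completed dialogue
    return 0.0

reward = run_dialogue(Agent("tourist"), Agent("assistant"))

In the actual system the NLU and NLG networks are seeded from DSTC2 data and the per-agent policies are optimised concurrently with reinforcement learning on such rewards; the sketch only illustrates the structure of the online interaction.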

Authors (4)
  1. Alexandros Papangelis (23 papers)
  2. Yi-Chia Wang (12 papers)
  3. Piero Molino (18 papers)
  4. Gokhan Tur (47 papers)
Citations (32)