
Supervised Seeded Iterated Learning for Interactive Language Learning (2010.02975v1)

Published 6 Oct 2020 in cs.CL

Abstract: Language drift has been one of the major obstacles to training language models through interaction. When word-based conversational agents are trained towards completing a task, they tend to invent their own language rather than leverage natural language. In the recent literature, two general methods partially counter this phenomenon: Supervised Selfplay (S2P) and Seeded Iterated Learning (SIL). While S2P jointly trains interactive and supervised losses to counter the drift, SIL changes the training dynamics to prevent language drift from occurring. In this paper, we first highlight their respective weaknesses, i.e., late-stage training collapses and a higher negative log-likelihood when evaluated on a human corpus. Given these observations, we introduce Supervised Seeded Iterated Learning to combine both methods and minimize their respective weaknesses. We then show the effectiveness of Supervised Seeded Iterated Learning in the language-drift translation game.
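To make the combination concrete, below is a minimal sketch, assuming a PyTorch-style setup: an SIL-like teacher/student generation loop whose interactive phase is anchored by an S2P-style supervised term. Every name in it (`Agent`, the three loss functions, `alpha`, and the loop lengths) is an illustrative placeholder, not the authors' code or exact recipe.

```python
# Minimal sketch (not the authors' released code) of combining SIL's
# teacher/student generations with an S2P-style supervised anchor.
# Agent, the three losses, alpha, and all loop lengths are assumed
# placeholders standing in for the translation-game agents and data.
import copy

import torch
import torch.nn as nn
import torch.nn.functional as F

class Agent(nn.Module):
    """Stand-in for a seq2seq translation agent."""
    def __init__(self):
        super().__init__()
        self.net = nn.Linear(8, 8)

    def forward(self, x):
        return self.net(x)

def interactive_loss(agent):
    # Placeholder for the selfplay / translation-game objective.
    return agent(torch.randn(4, 8)).pow(2).mean()

def supervised_loss(agent):
    # Placeholder for negative log-likelihood on the seed human corpus.
    x, y = torch.randn(4, 8), torch.randn(4, 8)
    return F.mse_loss(agent(x), y)

def imitation_loss(student, teacher):
    # Placeholder: the student imitates the frozen teacher's outputs,
    # the iterated-learning bottleneck that filters out drifted language.
    x = torch.randn(4, 8)
    with torch.no_grad():
        target = teacher(x)
    return F.mse_loss(student(x), target)

student = Agent()  # in the paper, seeded by supervised pretraining
alpha = 0.5        # assumed weight mixing the two training signals

for generation in range(3):
    # Teacher phase: duplicate the student, refine it interactively
    # while the supervised term (the S2P ingredient) counters drift.
    teacher = copy.deepcopy(student)
    opt = torch.optim.Adam(teacher.parameters(), lr=1e-3)
    for _ in range(20):
        loss = interactive_loss(teacher) + alpha * supervised_loss(teacher)
        opt.zero_grad()
        loss.backward()
        opt.step()

    # Imitation phase: the student distills the refined teacher before
    # becoming the next generation's starting point.
    opt = torch.optim.Adam(student.parameters(), lr=1e-3)
    for _ in range(20):
        loss = imitation_loss(student, teacher)
        opt.zero_grad()
        loss.backward()
        opt.step()
```

In the actual paper, the interactive objective is the translation-game task and the supervised and imitation terms are negative log-likelihoods on real data; only the overall loop structure is intended to carry over from this sketch.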

Authors (5)
  1. Yuchen Lu (17 papers)
  2. Soumye Singhal (10 papers)
  3. Florian Strub (39 papers)
  4. Olivier Pietquin (90 papers)
  5. Aaron Courville (201 papers)
Citations (8)