Continual adaptation for efficient machine communication (1911.09896v2)
Published 22 Nov 2019 in cs.CL
Abstract: To communicate with new partners in new contexts, humans rapidly form new linguistic conventions. Recent neural language models are able to comprehend and produce the existing conventions present in their training data, but are not able to flexibly and interactively adapt those conventions on the fly as humans do. We introduce an interactive repeated reference task as a benchmark for models of adaptation in communication and propose a regularized continual learning framework that allows an artificial agent initialized with a generic language model to more accurately and efficiently communicate with a partner over time. We evaluate this framework through simulations on COCO and in real-time reference game experiments with human partners.
- Robert D. Hawkins
- Minae Kwon
- Dorsa Sadigh
- Noah D. Goodman
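The core idea of the regularized continual learning framework, adapting an agent to a specific partner while anchoring it to its generic initialization, can be sketched in a minimal form. This is an illustrative assumption, not the paper's exact objective: it uses a toy linear model and a simple L2 penalty toward the initial parameters, standing in for a language model and the paper's regularizer.

```python
import numpy as np

def grad_step(w, w0, X, y, lr=0.1, reg=0.5):
    """One gradient step on mean squared error plus reg * ||w - w0||^2.

    The penalty term keeps the adapted parameters w close to the
    generic initialization w0, so partner-specific updates do not
    erase what the pre-trained model already knows.
    """
    pred = X @ w
    grad = X.T @ (pred - y) / len(y) + 2 * reg * (w - w0)
    return w - lr * grad

rng = np.random.default_rng(0)
X = rng.normal(size=(32, 3))          # toy "interaction" data
w_true = np.array([1.0, -2.0, 0.5])   # partner-specific target
y = X @ w_true

w0 = np.zeros(3)                      # "generic" initialization
w = w0.copy()
for _ in range(200):                  # continual adaptation steps
    w = grad_step(w, w0, X, y)
```

With a large `reg`, the adapted `w` moves toward the partner-specific solution but remains pulled back toward `w0`, trading off fast adaptation against drift from the generic model.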