Learning New Skills after Deployment: Improving open-domain internet-driven dialogue with human feedback (2208.03270v2)

Published 5 Aug 2022 in cs.CL and cs.AI

Abstract: Frozen models trained to mimic static datasets can never improve their performance. Models that can employ internet-retrieval for up-to-date information and obtain feedback from humans during deployment provide the promise of both adapting to new information, and improving their performance. In this work we study how to improve internet-driven conversational skills in such a learning framework. We collect deployment data, which we make publicly available, of human interactions, and collect various types of human feedback -- including binary quality measurements, free-form text feedback, and fine-grained reasons for failure. We then study various algorithms for improving from such feedback, including standard supervised learning, rejection sampling, model-guiding and reward-based learning, in order to make recommendations on which type of feedback and algorithms work best. We find the recently introduced Director model (Arora et al., '22) shows significant improvements over other existing approaches.
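Among the approaches the abstract lists, rejection sampling is the simplest to illustrate: generate several candidate replies and keep the one a feedback-trained reward model scores highest. Below is a minimal sketch of that idea, assuming a hypothetical `reward_model` callable (e.g. a classifier trained on the binary quality judgments mentioned above); the names are illustrative placeholders, not the paper's actual API.

```python
def rerank_by_reward(context, candidates, reward_model):
    """Rejection-sampling-style reranking: score each candidate reply with a
    reward model (here a stand-in for a classifier trained on binary human
    feedback) and return the highest-scoring reply."""
    scored = [(reward_model(context, reply), reply) for reply in candidates]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return scored[0][1]


if __name__ == "__main__":
    # Trivial placeholder scorer so the sketch runs end to end.
    dummy_reward = lambda ctx, reply: len(reply)
    print(rerank_by_reward(
        "Hi, how are you?",
        ["ok", "I'm doing well, thanks for asking!"],
        dummy_reward,
    ))
```

In the paper's setting, the same reranking idea can be contrasted with methods that shape generation directly (e.g. model-guiding as in Director) rather than filtering outputs after the fact.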

Authors (6)
  1. Jing Xu (244 papers)
  2. Megan Ung (10 papers)
  3. Mojtaba Komeili (13 papers)
  4. Kushal Arora (13 papers)
  5. Y-Lan Boureau (26 papers)
  6. Jason Weston (130 papers)
Citations (35)