Towards Teachable Reasoning Systems: Using a Dynamic Memory of User Feedback for Continual System Improvement (2204.13074v2)

Published 27 Apr 2022 in cs.CL and cs.AI

Abstract: Our goal is a teachable reasoning system for question-answering (QA), where a user can interact with faithful answer explanations and correct its errors so that the system improves over time. Our approach is to augment a QA model with a dynamic memory of user feedback, containing user-supplied corrections to erroneous model beliefs that users identify during interaction. Retrievals from memory are used as additional context for QA, to help avoid previous mistakes in similar new situations - a novel application of memory-based continuous learning. With simulated feedback, we find that our system (called TeachMe) continually improves over time without model retraining, requiring feedback on only 25% of training examples to reach within 1% of the upper bound (feedback on all examples). In experiments with real users, we observe a similar trend, with performance improving by over 15% on a hidden test set after teaching. This suggests new opportunities for using frozen LLMs in an interactive setting where users can inspect, debug, and correct the model's beliefs, leading to improved system performance over time.
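
As a rough illustration of the mechanism the abstract describes (a dynamic memory of user corrections whose retrievals are prepended as extra QA context), here is a minimal Python sketch. The `FeedbackMemory` class, the word-overlap retrieval, and the prompt format are illustrative assumptions, not the paper's actual implementation; a real system would use a stronger retriever and the paper's own QA model.

```python
# Hypothetical sketch of a dynamic feedback memory for QA.
# Names and retrieval scheme are illustrative, not the paper's code.
from dataclasses import dataclass, field

@dataclass
class FeedbackMemory:
    """Stores user-supplied corrections keyed by the question that triggered them."""
    entries: list = field(default_factory=list)  # (question, correction) pairs

    def add(self, question: str, correction: str) -> None:
        self.entries.append((question, correction))

    def retrieve(self, question: str, k: int = 3) -> list:
        # Toy similarity: word overlap with past questions. Any dense or
        # sparse retriever could be substituted here.
        q_words = set(question.lower().split())
        scored = sorted(
            self.entries,
            key=lambda e: len(q_words & set(e[0].lower().split())),
            reverse=True,
        )
        return [correction for _, correction in scored[:k]]

def answer(question: str, memory: FeedbackMemory, model) -> str:
    """Answer with retrieved corrections prepended as additional context,
    so past mistakes inform similar new questions -- no retraining needed."""
    context = " ".join(memory.retrieve(question))
    return model(f"Context: {context}\nQuestion: {question}")
```

Because the memory grows at interaction time while the underlying model stays frozen, improvement comes entirely from better context, which matches the abstract's claim of continual improvement without model retraining.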

Authors (3)
  1. Bhavana Dalvi Mishra (26 papers)
  2. Oyvind Tafjord (49 papers)
  3. Peter Clark (108 papers)
Citations (32)
