Watch Your Step: Optimal Retrieval for Continual Learning at Scale (2404.10758v2)

Published 16 Apr 2024 in cs.CV

Abstract: In continual learning, a model learns incrementally over time while minimizing interference between old and new tasks. One of the most widely used approaches in continual learning is replay. Replay methods support interleaved learning by storing past experiences in a replay buffer. Although there are methods for selectively constructing the buffer and reprocessing its contents, there is limited exploration of the problem of selectively retrieving samples from the buffer. Current solutions have been tested in limited settings and, more importantly, in isolation. Existing work has also not explored the impact of duplicate replays on performance. In this work, we propose a framework for evaluating selective retrieval strategies, categorized by simple, independent class- and sample-selective primitives. We evaluate several combinations of existing strategies for selective retrieval and report their performance. Furthermore, we propose a set of strategies to prevent duplicate replays and explore whether new samples with low loss values can be learned without replay. In an effort to match our problem setting to a realistic continual learning pipeline, we restrict our experiments to a setting involving a large, pre-trained, open-vocabulary object detection model, which is fully fine-tuned on a sequence of 15 datasets.
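To make the retrieval setup described in the abstract concrete, below is a minimal, illustrative Python sketch (not the authors' code) of how class-selective and sample-selective primitives could be composed when drawing a replay batch from a buffer, together with a simple duplicate-replay deprioritization and a low-loss skip rule. All names (`ReplayBuffer`, `retrieve`, `should_replay`, the selector functions, and the thresholds) are assumptions made for this example.

```python
# Illustrative sketch of selective retrieval from a replay buffer.
# Not the paper's implementation; all APIs and defaults are hypothetical.
import random
from collections import defaultdict


class ReplayBuffer:
    def __init__(self, capacity=1000):
        self.capacity = capacity
        self.samples = []                     # entries: (sample_id, class_id, last_loss)
        self.replay_counts = defaultdict(int) # how often each sample has been replayed

    def add(self, sample_id, class_id, loss):
        if len(self.samples) < self.capacity:
            self.samples.append((sample_id, class_id, loss))

    def retrieve(self, batch_size, class_selector, sample_selector):
        """A class-selective primitive picks which classes to draw from;
        a sample-selective primitive then ranks candidates within those classes."""
        by_class = defaultdict(list)
        for s in self.samples:
            by_class[s[1]].append(s)
        chosen_classes = class_selector(list(by_class.keys()))
        candidates = [s for c in chosen_classes for s in by_class[c]]
        ranked = sample_selector(candidates)
        # Duplicate-replay heuristic: prefer least-replayed samples; Python's
        # stable sort keeps the sample_selector ordering within equal counts.
        ranked.sort(key=lambda s: self.replay_counts[s[0]])
        batch = ranked[:batch_size]
        for s in batch:
            self.replay_counts[s[0]] += 1
        return batch


def uniform_classes(classes, k=4):
    """Class-selective primitive: sample up to k classes uniformly."""
    return random.sample(classes, min(k, len(classes)))


def highest_loss_first(candidates):
    """Sample-selective primitive: rank candidates by descending stored loss."""
    return sorted(candidates, key=lambda s: -s[2])


def should_replay(new_sample_loss, threshold=0.1):
    """Skip replay for incoming samples whose loss is already low
    (hypothetical threshold)."""
    return new_sample_loss >= threshold
```

The point of the sketch is the factorization: the class-level and sample-level choices are independent primitives that can be swapped and combined, which mirrors the paper's framing of evaluating combinations of selective retrieval strategies rather than monolithic policies.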

Authors (2)
  1. Truman Hickok (5 papers)
  2. Dhireesha Kudithipudi (31 papers)