
Dynamic Dual Buffer with Divide-and-Conquer Strategy for Online Continual Learning (2505.18101v1)

Published 23 May 2025 in cs.LG

Abstract: Online Continual Learning (OCL) presents a complex learning environment in which new data arrives in a batch-to-batch online format, and the risk of catastrophic forgetting can significantly impair model efficacy. In this study, we address OCL by introducing an innovative memory framework that incorporates a short-term memory system to retain dynamic information and a long-term memory system to archive enduring knowledge. Specifically, the long-term memory system comprises a collection of sub-memory buffers, each linked to a cluster prototype and designed to retain data samples from distinct categories. We propose a novel $K$-means-based sample selection method to identify cluster prototypes for each encountered category. To safeguard essential and critical samples, we introduce a novel memory optimisation strategy that selectively retains samples in the appropriate sub-memory buffer by evaluating each cluster prototype against incoming samples through an optimal transportation mechanism. This approach specifically promotes each sub-memory buffer to retain data samples that exhibit significant discrepancies from the corresponding cluster prototype, thereby ensuring the preservation of semantically rich information. In addition, we propose a novel Divide-and-Conquer (DAC) approach that formulates the memory updating as an optimisation problem and divides it into several subproblems. As a result, the proposed DAC approach can solve these subproblems separately and thus can significantly reduce computations of the proposed memory updating process. We conduct a series of experiments across standard and imbalanced learning settings, and the empirical findings indicate that the proposed memory framework achieves state-of-the-art performance in both learning contexts.


Summary

Dynamic Dual Buffer with Divide-and-Conquer Strategy for Online Continual Learning

The paper "Dynamic Dual Buffer with Divide-and-Conquer Strategy for Online Continual Learning" addresses the challenge of catastrophic forgetting in Online Continual Learning (OCL), where data is introduced continuously in a batch-by-batch manner. The authors propose a novel memory management framework that incorporates both short-term and long-term memory systems, aiming to preserve critical information from the data stream while minimizing computational overhead. This paper explores the technical intricacies and evaluates the efficacy of the proposed method across standard and imbalanced learning settings.

Memory Framework and Methodology

The framework introduced by the authors is known as Online Dynamic Expandable Dual Memory (ODEDM). It integrates a lightweight short-term memory buffer for immediate data processing and a long-term memory buffer that uses cluster prototypes to sustain significant knowledge over time. In essence, the short-term buffer handles transient inputs efficiently, while the long-term buffer ensures the retention of semantically rich information.
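The dual-buffer arrangement described above can be sketched in a few lines. This is a minimal, hypothetical illustration of the data structure only (a FIFO short-term buffer plus per-prototype long-term sub-buffers); the class name, methods, and capacity handling are illustrative choices, not the authors' implementation.

```python
from collections import deque

class DualMemory:
    """Hypothetical sketch of a dual-buffer memory: a small FIFO
    short-term buffer plus per-prototype long-term sub-buffers."""

    def __init__(self, short_capacity, sub_capacity):
        self.short_term = deque(maxlen=short_capacity)  # transient inputs
        self.long_term = {}  # prototype id -> list of retained samples
        self.sub_capacity = sub_capacity

    def observe(self, sample):
        # New samples land in the short-term buffer first; the oldest
        # entry is evicted automatically once capacity is reached.
        self.short_term.append(sample)

    def consolidate(self, proto_id, sample):
        # Move a sample into the sub-buffer of its assigned prototype,
        # up to that sub-buffer's capacity.
        buf = self.long_term.setdefault(proto_id, [])
        if len(buf) < self.sub_capacity:
            buf.append(sample)
```

In this sketch the selection of which prototype a sample belongs to, and which samples survive when a sub-buffer is full, is left to the prototype-matching and optimization machinery described next.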

One of the key innovations is the use of a $K$-means-based sample selection method to identify and form cluster prototypes for each category encountered. The selected prototypes guide the allocation of samples into sub-memory buffers. To optimize the retention of critical samples, the authors employ a novel approach based on the Sinkhorn distance, a computationally efficient, entropy-regularized variant of the Wasserstein distance that compares probability distributions at a lower cost than exact optimal transport.
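The two ingredients named above can be illustrated with short, generic implementations. This sketch is not the paper's procedure: `kmeans_prototypes` is plain K-means used to pick per-class prototypes, and `sinkhorn_distance` is a standard entropy-regularized Sinkhorn iteration; all function names and parameters are illustrative assumptions.

```python
import numpy as np

def kmeans_prototypes(X, k, iters=20, seed=0):
    """Plain K-means: return k cluster prototypes for the samples X."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        # Assign each sample to its nearest center, then recompute means.
        d = np.linalg.norm(X[:, None] - centers[None], axis=-1)
        labels = d.argmin(1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(0)
    return centers

def sinkhorn_distance(a, b, C, eps=0.1, iters=100):
    """Entropy-regularized OT: marginals a, b; cost matrix C."""
    K = np.exp(-C / eps)          # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(iters):        # alternating marginal scaling
        v = b / (K.T @ u)
        u = a / (K @ v)
    P = u[:, None] * K * v[None, :]  # transport plan
    return (P * C).sum()
```

In the paper's framework, a comparison of this kind between a cluster prototype and incoming samples drives which samples each sub-memory buffer retains; here the Sinkhorn routine is only the generic primitive.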

Moreover, the paper introduces a Divide-and-Conquer (DAC) strategy that formulates the memory update as an optimization problem and subdivides it into manageable subproblems that can be solved independently. This recursive decomposition substantially reduces the computational demands of the update, making ODEDM a scalable solution for OCL.
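The divide step can be sketched as follows. This is a simplified, hypothetical stand-in for the paper's optimization: samples are partitioned by nearest prototype, and each sub-buffer's retention subproblem is then solved on its own (here, greedily keeping the samples farthest from the prototype, echoing the paper's preference for high-discrepancy samples, rather than via the actual transport-based objective).

```python
import numpy as np

def dac_update(incoming, prototypes, sub_buffers, capacity):
    """Hypothetical DAC-style update: partition incoming samples by
    nearest prototype, then solve each sub-buffer's retention
    subproblem independently."""
    # Divide: assign each incoming sample to its nearest prototype.
    d = np.linalg.norm(incoming[:, None] - prototypes[None], axis=-1)
    assign = d.argmin(1)
    for j, proto in enumerate(prototypes):
        pool = list(sub_buffers[j]) + [
            x for x, a in zip(incoming, assign) if a == j
        ]
        # Conquer subproblem j in isolation: retain the `capacity`
        # samples with the largest distance to prototype j.
        pool.sort(key=lambda x: -np.linalg.norm(x - proto))
        sub_buffers[j] = pool[:capacity]
    return sub_buffers
```

Because each subproblem only touches one sub-buffer's candidate pool, the per-update cost scales with the size of the largest pool rather than the whole memory, which is the source of the claimed computational savings.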

Empirical Evaluation

The effectiveness of ODEDM was assessed through a series of experiments using datasets such as CIFAR10, CIFAR100, and TINYIMG. Results consistently demonstrated that models employing ODEDM achieved state-of-the-art performance, particularly in memory-constrained scenarios and imbalanced learning settings. For instance, DER++ integrated with ODEDM showed substantial gains in Class-Incremental Learning (Class-IL) accuracy across all buffer sizes, underscoring ODEDM's capability to mitigate catastrophic forgetting.

A rigorous comparative analysis against benchmark algorithms like DER, DER++, FDR, and iCaRL further highlighted ODEDM's robustness. The paper documents significant improvements in both Class-IL and Task-Incremental Learning (Task-IL) contexts, showcasing the potential for long-term memory conservation in OCL.

Practical and Theoretical Implications

The dynamic memory allocation mechanism within ODEDM exemplifies a pragmatic advance in continual learning research, where memory resources are progressively reallocated from short-term to long-term buffers as learning progresses, enhancing the retention of enduring knowledge. This strategy not only addresses immediate learning needs but also adapts to the evolving data landscape.
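The progressive reallocation can be illustrated with a simple capacity schedule. The linear form below is an assumption for illustration only; the paper's actual reallocation rule is not reproduced here.

```python
def split_capacity(total, progress):
    """Hypothetical linear schedule: as training progresses
    (progress in [0, 1]), capacity shifts from the short-term
    buffer to the long-term buffer."""
    long_cap = round(total * progress)
    short_cap = total - long_cap
    return short_cap, long_cap
```

Early in training most of the budget serves the short-term buffer for fresh inputs; later, the long-term sub-buffers absorb the capacity to consolidate enduring knowledge.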

Theoretically, the application of Sinkhorn distance and DAC introduces new perspectives on large-scale optimization in continual learning. These methodologies establish a foundation for future research focused on probabilistic measures and efficient data sampling techniques aimed at optimizing memory usage in neural networks.

Future Developments

Looking forward, the implications of the ODEDM framework extend beyond supervised learning scenarios. The paper suggests exploring unsupervised learning contexts, where dynamic buffer management strategies and efficient prototype clustering could yield transformative insights. Furthermore, the scalability of DAC in large-scale neural networks presents opportunities for expanding the application of ODEDM to diverse real-world environments requiring adaptable and resilient learning models.

In conclusion, the paper showcases a compelling approach by leveraging a dual memory buffer strategy coupled with advanced optimization techniques. This work contributes meaningful advances to the field of continual learning, opening avenues for continued innovation in efficiently training models within resource-constrained environments.
