Streaming Continual Learning Overview

Updated 15 December 2025
  • Streaming Continual Learning is a computational paradigm that updates models from non-i.i.d. data streams in a single pass while operating under strict memory constraints.
  • It balances rapid adaptation (plasticity) with knowledge retention (stability) using techniques like replay buffers, regularization, coreset selection, and modular architectures.
  • SCL methods are applicable to diverse scenarios such as robotics, streaming vision, and IoT, where efficient online inference and mitigation of catastrophic forgetting are critical.

Streaming Continual Learning (SCL) is a computational paradigm that addresses the challenge of incrementally adapting machine learning models to non-stationary data streams, under constraints of single-pass access, bounded memory, and immediate inference. SCL is central to scenarios where data distributions evolve dynamically, past knowledge must be retained without catastrophic forgetting, and resource limitations demand efficient, scalable methods.

1. Problem Formulation and Core Metrics

Streaming Continual Learning operates on a potentially infinite, non-i.i.d. data stream $\{(x_t, y_t)\}_{t=1}^T$ where each $(x_t, y_t)$ is seen once, and model updates must be performed online. The objectives are to maximize predictive performance on both new and previously encountered concepts (plasticity and stability), given strict memory constraints (e.g., a replay buffer, coreset, or sketch of fixed size $B$).

Evaluation in SCL encompasses both plasticity (prequential or online accuracy on the current distribution) and stability (retention or minimization of forgetting for past knowledge). Typical stability metrics include average forgetting $F_j = \max_{t \in C_j} \mathrm{acc}_j(t) - \min_{t' \in C_j,\, t' > t} \mathrm{acc}_j(t')$ over concepts $C_j$, and the corresponding global stability $S = 1 - \frac{1}{|C|} \sum_j F_j$ (Lourenço et al., 12 Dec 2025). Final performance is often normalized by the offline upper bound (ideal batch learning) as in $Q_\mathrm{all} = \frac{1}{T} \sum_{t=1}^T \frac{P_t}{P^\mathrm{offline}_t}$ (Khawand et al., 2023).
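
As a concrete illustration, the following sketch computes per-concept forgetting and the aggregate stability score from a logged accuracy matrix; the matrix layout (rows are evaluation steps, columns are concepts, NaN before a concept first appears) is an assumed convention, not an interface from the cited works:

import numpy as np

def forgetting_and_stability(acc):
    # acc[t, j]: accuracy on concept j measured at evaluation step t,
    # with NaN entries before concept j first appears in the stream.
    num_concepts = acc.shape[1]
    F = np.zeros(num_concepts)
    for j in range(num_concepts):
        col = acc[:, j]
        peak_t = np.nanargmax(col)              # step of peak accuracy on concept j
        later = col[peak_t:]
        later = later[~np.isnan(later)]         # accuracies from the peak onward
        F[j] = later.max() - later.min()        # drop from peak to the worst later value
    S = 1.0 - F.mean()                          # global stability S = 1 - (1/|C|) sum_j F_j
    return F, S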

2. Algorithmic Families and Architectural Approaches

SCL algorithms can be categorized by how they handle the plasticity–stability trade-off and by the nature of their memory and adaptation mechanisms. The dominant families include:

Replay-based methods: Maintain a compact replay buffer $\mathcal{M}_t$ of past samples and update the model by interleaving current and replayed data. Key strategies include uniform sampling (Experience Replay (Soutif-Cormerais et al., 2023)), class-balanced reservoir sampling (CBRS), and coverage-maximizing memory (Memento (Dietmüller et al., 16 May 2024)). Vanilla ER is an exceptionally strong baseline when properly tuned (Soutif-Cormerais et al., 2023).
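
A minimal reservoir-sampling buffer, the building block behind vanilla ER, is sketched below; the class and its field names are illustrative rather than the interface of any cited implementation:

import random

class ReservoirBuffer:
    # Fixed-capacity buffer that keeps a uniform sample of the stream seen so far.
    def __init__(self, capacity):
        self.capacity = capacity
        self.data = []
        self.n_seen = 0

    def add(self, item):
        self.n_seen += 1
        if len(self.data) < self.capacity:
            self.data.append(item)              # fill phase: keep everything
        else:
            j = random.randrange(self.n_seen)   # keep item with probability capacity / n_seen
            if j < self.capacity:
                self.data[j] = item

    def sample(self, k):
        return random.sample(self.data, min(k, len(self.data)))

Class-balanced or coverage-maximizing variants change only the eviction rule in add(), not the overall interface.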

Regularization-based methods: Apply explicit penalties to limit parameter drift, typified by Elastic Weight Consolidation (EWC) and similar Fisher information–aware constraints (Bartoli et al., 8 Nov 2024, Wang et al., 2020, Cossu et al., 2021). EWC is especially prevalent in SCL for vision and graph neural networks.
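
A minimal PyTorch-style sketch of the resulting quadratic penalty is given below, assuming a diagonal Fisher estimate and anchor parameters stored after the previous concept (the dictionary layout and helper name are illustrative):

import torch

def ewc_penalty(model, anchor_params, fisher_diag, lam=1.0):
    # Quadratic penalty (lam / 2) * sum_i F_i * (theta_i - theta_i*)^2,
    # where anchor_params and fisher_diag are dicts keyed by parameter name.
    loss = torch.zeros((), device=next(model.parameters()).device)
    for name, p in model.named_parameters():
        if name in fisher_diag:
            loss = loss + (fisher_diag[name] * (p - anchor_params[name]) ** 2).sum()
    return 0.5 * lam * loss

In a streaming setting the Fisher estimate itself is typically maintained online, for example as a running average of squared gradients, rather than recomputed at discrete task boundaries.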

Coreset and prototype selection: Construct a synopsis of the data stream—e.g., by bilevel optimization (Borsos et al., 2020), coreset compression (Lourenço et al., 12 Dec 2025), or class-prototype storage (Wang et al., 2022, Khawand et al., 2023)—used as the basis for replay or direct analytic updates (such as discriminant analysis heads).
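
As one concrete instance of prototype storage, the sketch below maintains running class means over frozen features and classifies by the nearest prototype; this is a generic nearest-class-mean head with $O(Cd)$ memory, not the exact update rule of any cited method:

import numpy as np

class StreamingPrototypes:
    # Running per-class mean of (frozen) feature vectors.
    def __init__(self, num_classes, feat_dim):
        self.means = np.zeros((num_classes, feat_dim))
        self.counts = np.zeros(num_classes)

    def update(self, feat, label):
        self.counts[label] += 1
        self.means[label] += (feat - self.means[label]) / self.counts[label]

    def predict(self, feat):
        seen = self.counts > 0                          # assumes at least one class seen so far
        dists = np.linalg.norm(self.means[seen] - feat, axis=1)
        return np.flatnonzero(seen)[np.argmin(dists)]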

Functional and architectural modularity: Leverages parameter isolation or modular growth such as prompt-based adaptation (PROL (Ma'sum et al., 16 Jul 2025)), dynamic transformer memories (Savadikar et al., 2023), or analytic heads (SRDA (Khawand et al., 2023), SCROLL (Wang et al., 2022)) atop frozen or pre-trained feature extractors. Streaming learning in Echo State Networks exploits a fixed reservoir with fast-readout updates (Cossu et al., 2021).

Generative replay: Attempts to overcome memory constraints and privacy issues by synthesizing pseudo-samples from a generative model (e.g., diffusion-based (He et al., 22 Jun 2024)), with knowledge distillation between generator versions to anchor past-task fidelity.

Label-efficient and semi-supervised streaming: Incorporates unlabeled data via pseudo-labeling and consistency regularization (e.g., Efficient-CLS (Wu et al., 2022), CIC (Boschini et al., 2021)), essential for annotation-bounded streams such as robotic perception or live-video object detection.
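
A minimal sketch of confidence-thresholded pseudo-labeling with a consistency term is shown below; the augmentation callables and threshold are placeholders, and the formulation is a generic FixMatch-style loss rather than the exact recipe of Efficient-CLS or CIC:

import torch
import torch.nn.functional as F

def semi_supervised_loss(model, x_lab, y_lab, x_unlab, weak_aug, strong_aug, tau=0.9):
    # Supervised loss on labelled samples plus a consistency term on unlabelled ones:
    # pseudo-labels from a weakly augmented view supervise a strongly augmented view,
    # retained only where the model is confident (max probability >= tau).
    sup = F.cross_entropy(model(x_lab), y_lab)
    with torch.no_grad():
        probs = F.softmax(model(weak_aug(x_unlab)), dim=1)
        conf, pseudo = probs.max(dim=1)
        mask = (conf >= tau).float()
    unsup = (F.cross_entropy(model(strong_aug(x_unlab)), pseudo, reduction="none") * mask).mean()
    return sup + unsup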

Memory-free approaches: Employ functional regularizers, prompt-based adaptation, or self-evolving deep clustering (ADCN (Ashfahani et al., 2021), PROL (Ma'sum et al., 16 Jul 2025)) to avoid any storage of raw past samples, at the expense of limited plasticity or increased model complexity.

3. Online Update Mechanisms and Sample Selection

SCL methodologies are defined by stringent online processing constraints: each sample is seen only once, updates must complete before the next arrival, memory is bounded, and predictions must be available immediately.

An archetypal online SCL loop, omitting task-ID information, is:

for t in 1...T:
    receive x_t; output prediction ŷ_t = f_t(x_t)   # prequential: predict before updating
    observe label y_t
    M_t = update_buffer(M_{t-1}, (x_t, y_t))        # e.g., reservoir, coreset, Memento
    sample B_mem from M_t
    L = loss(f_t, (x_t, y_t)) + loss(f_t, B_mem)    # blend current and replayed samples
    f_{t+1} = SGD_step(f_t, L)
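
The same loop in hedged, runnable PyTorch form (model, optimizer, and stream are placeholders; the buffer update is plain reservoir sampling, with class-balanced or coverage-based policies plugging in at the same point):

import random
import torch
import torch.nn.functional as F

def run_stream(model, optimizer, stream, buffer_capacity=200, replay_batch=16):
    # Single-pass experience-replay loop: predict (prequentially), update a reservoir
    # buffer, then take one SGD step on the current sample plus a replayed mini-batch.
    # `stream` yields tensors x_t of shape (1, d) and integer labels y_t of shape (1,).
    buffer, n_seen, correct = [], 0, 0
    for x_t, y_t in stream:
        with torch.no_grad():                           # predict before updating
            correct += (model(x_t).argmax(dim=1) == y_t).sum().item()
        n_seen += 1
        if len(buffer) < buffer_capacity:               # reservoir-sampling buffer update
            buffer.append((x_t, y_t))
        else:
            j = random.randrange(n_seen)
            if j < buffer_capacity:
                buffer[j] = (x_t, y_t)
        replayed = random.sample(buffer, min(replay_batch, len(buffer)))
        xs = torch.cat([x_t] + [x for x, _ in replayed])
        ys = torch.cat([y_t] + [y for _, y in replayed])
        loss = F.cross_entropy(model(xs), ys)           # blend current and replayed samples
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    return correct / max(n_seen, 1)                     # online (prequential) accuracy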

Replay-based methods generally blend the loss over new and replayed samples (and may include distillation or contrastive terms), while buffer management techniques (e.g., coverage-based eviction or class-exemplar strategies) are crucial to maximizing utility under a fixed budget $|\mathcal{M}|$ (Dietmüller et al., 16 May 2024, Soutif-Cormerais et al., 2023).

4. Plasticity–Stability Trade-offs and Theoretical Insights

All SCL approaches must navigate the inherent tension between rapid adaptation (plasticity) and retention (stability). Mechanisms for managing this tension include interleaved replay, importance-weighted parameter regularization, parameter isolation or modular growth, and functional (analytic or in-context) updates on top of frozen representations.
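
A generic per-step objective combining these mechanisms can be written as $\mathcal{L}_t(\theta) = \ell(f_\theta(x_t), y_t) + \lambda_{\mathrm{rep}}\, \mathbb{E}_{(x,y)\sim\mathcal{M}_t}[\ell(f_\theta(x), y)] + \lambda_{\mathrm{reg}} \sum_i \Omega_i (\theta_i - \theta_i^{*})^2$, where the replay term promotes stability through rehearsal and the quadratic term (with importance weights $\Omega_i$ and anchor parameters $\theta^{*}$) penalizes parameter drift. The weighting coefficients $\lambda$ are design choices rather than values prescribed by any single cited method; setting either to zero recovers a pure replay or pure regularization approach.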

These designs enable SCL systems to operate with constant or sublinear memory costs, process data streams online, and maintain bounded forgetting, but often exhibit trade-offs between rapid adaptation to new drifts and long-term knowledge retention (He et al., 22 Jun 2024, Bartoli et al., 8 Nov 2024).

5. Practical Instantiations and Domain Extensions

SCL is realized in various neural and statistical architectures, with representative instantiations including:

  • Streaming GNNs: EWC- and replay-augmented message-passing architectures (Bartoli et al., 8 Nov 2024, Wang et al., 2020).
  • Self-evolving clustering: ADCN enables fully unsupervised SCL in image/structured modalities (Ashfahani et al., 2021), automatically growing/shrinking both width and depth of embedding layers in response to stream drift.
  • Discriminant head models: Regularized streaming QDA (SRDA) and analytic heads on deep frozen backbones deliver strong continual performance in high-class-count settings (Khawand et al., 2023), with analytic update cost $O(Cd^2)$.
  • Diffusion- and GAN-based generative replay: Successive generator distillation addresses the “replay quality degradation” associated with classical generative replay (He et al., 22 Jun 2024).
  • Advanced sample selection: Distributional coverage metrics (Jensen–Shannon divergence of output/label histograms over memory batches), batch-based eviction, and retraining triggers based on relative coverage increase (Dietmüller et al., 16 May 2024); a simplified coverage-score sketch follows this list.
  • Prompt-based approaches: Single, lightweight prompt generators with per-class adaptation (PROL) maintain streaming adaptability without rehearsal or parameter explosion (Ma'sum et al., 16 Jul 2025). Modular design ensures sublinear growth in model size with streamed classes.
  • Cold start / base initialization–free learning: Approaches such as CSSL (Wolfe et al., 2022) train end-to-end without a batch pretraining phase, demonstrating convergence and calibration even under random initialization.
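
As a minimal illustration of such a coverage-style criterion (a simplified stand-in suggested by the sample-selection item above, not Memento's actual algorithm; all names are hypothetical), the sketch below scores a candidate batch by the Jensen–Shannon divergence between its label histogram and that of the current memory:

import numpy as np

def js_divergence(p, q, eps=1e-12):
    # Jensen-Shannon divergence between two discrete distributions.
    p, q = p + eps, q + eps
    p, q = p / p.sum(), q / q.sum()
    m = 0.5 * (p + q)
    kl = lambda a, b: np.sum(a * np.log(a / b))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def coverage_gain(memory_labels, batch_labels, num_classes):
    # A higher divergence suggests the candidate batch contains labels that are
    # under-represented in memory, so keeping it increases label coverage.
    hist = lambda labels: np.bincount(labels, minlength=num_classes).astype(float)
    return js_divergence(hist(memory_labels), hist(batch_labels))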

6. Methodological Comparisons, Metrics, and Limitations

Benchmarks in SCL consider both synthetic and real-world data with varying forms of distributional drift (abrupt, incremental, recurring), across application domains such as traffic forecasting, robotics, industrial IoT, large-scale image classification, video streaming, and simulation-based surrogate modeling (Bartoli et al., 8 Nov 2024, He et al., 22 Jun 2024, Stiller et al., 2022). Performance metrics include:

  • Average/Final accuracy: On-the-fly and end-of-stream evaluation, normalized by offline upper bounds (Soutif-Cormerais et al., 2023, Khawand et al., 2023).
  • Forgetting: Drop from maximum to final per-task/class accuracy.
  • Probed accuracy: Linear classifier performance on frozen features after streaming.
  • Memory and compute efficiency: Per-step (time and capacity), throughput, and inference costs (Ma'sum et al., 16 Jul 2025, Borsos et al., 2020).
  • Schedule-robustness: Invariance of final model performance to the order, batching, or scheduling of streamed data (Wang et al., 2022).

Empirical studies consistently show that, under proper tuning, generic replay-based approaches remain highly competitive, while more complex regularization or modularity strategies excel primarily in specialized or privacy-critical regimes (Soutif-Cormerais et al., 2023, Ma'sum et al., 16 Jul 2025). Functional adaptation (SRDA, SCROLL) and in-context SCL (with LTMs) offer compelling trade-offs in resource-limited or non-parametric settings (Lourenço et al., 12 Dec 2025).

Limitations include unbounded long-term forgetting if memory/prioritization is inadequate, model size growth in modular approaches, and lack of theoretical guarantees outside restricted settings (e.g., fixed backbone, overparameterized regime). Future directions call for general-purpose, adaptive memory policies, schedule-robust online adaptation, and principled integration of in-context and parametric learning (Wang et al., 2022, Lourenço et al., 12 Dec 2025).

7. SCL in Context: Systems, Applications, and Future Directions

SCL methods are now foundational in continual perception and control for embodied agents (STREAK (Bartoli et al., 8 Nov 2024)), streaming sensor analytics (IIoT/DSG (He et al., 22 Jun 2024)), large-scale vision (SRDA/class-incremental ImageNet (Khawand et al., 2023)), streaming object detection under label efficiency (Efficient-CLS (Wu et al., 2022)), network sample selection (Memento (Dietmüller et al., 16 May 2024)), cold-start learning (Wolfe et al., 2022), and model learning coupled to exascale simulations (Stiller et al., 2022). The field is characterized by:

  • Unifying the historically distinct continual learning (CL) and stream learning (SL) communities (Lourenço et al., 12 Dec 2025).
  • Focusing on data selection and compact memory design as key drivers of performance.
  • Emphasizing algorithmic robustness to arrival schedules, memory restrictions, and privacy policies.
  • Recognizing replay and modular adaptation as complementary techniques for managing stability–plasticity.
  • Scaling SCL to multi-domain, semi-supervised, unsupervised, or memoryless regimes via modularity, latent regularization, and in-context architectures.

Persistent open questions include the design of optimal memory organization for arbitrary data streams, scalable in-context learning strategies for high-dimensional/long-context models, and the search for universally schedule-robust, efficient SCL algorithms. The integration of meta-learning, task-agnostic drift detection, and hybrid generative–parametric models presents a frontier for further research and deployment.
