
Learning Without Forgetting (LwF)

Updated 13 October 2025
  • Learning Without Forgetting (LwF) is a continual learning paradigm that uses knowledge distillation to mitigate catastrophic forgetting when updating neural networks with new tasks.
  • It balances new-task learning and old-task retention by jointly optimizing a classification loss and a distillation loss without accessing past data.
  • LwF is especially beneficial in privacy-sensitive and storage-constrained environments, offering efficient incremental updates for various applications.

Learning Without Forgetting (LwF) is a continual learning paradigm designed to address the challenge of updating neural networks with new tasks or classes while mitigating catastrophic forgetting when prior task data is unavailable. LwF leverages knowledge distillation to preserve the functionality acquired from previous tasks as new information is incorporated, without requiring storage of, or access to, raw data from earlier tasks.

1. Problem Setting and Core Principles

LwF targets settings where models (typically convolutional neural networks) require incremental training: new tasks or classes are introduced sequentially, but the original task data is not retained, whether due to storage limitations, privacy concerns, or the practicalities of data access. Traditional remedies either require the old data (joint retraining) or, like full or partial fine-tuning, risk erasing prior knowledge because network parameters are updated globally, leading to catastrophic forgetting.

The core idea of LwF is to treat the original, pretrained model as a “teacher” that provides soft output predictions (responses) on new-task images, even though these images may not represent the old tasks accurately. The updated network (the “student”) is then jointly trained to both solve the new task and preserve the response behavior of the old network on these new-data inputs. This dual-loss training procedure uses only data from the new task, circumventing the need to access the old task’s raw data.

2. Technical Methodology

Let $f(x; \theta_s, \theta_o)$ denote a pretrained network, with shared parameters $\theta_s$ and old task–specific parameters $\theta_o$. When a new task with data $(X_n, Y_n)$ is added, new task–specific parameters $\theta_n$ are introduced.
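To make the parameter split concrete, a minimal PyTorch sketch (layer sizes, module names, and the toy input shape are our assumptions, not from the original paper) might organize the network as a shared backbone with one output head per task:

```python
import torch.nn as nn

class LwFNet(nn.Module):
    """Shared backbone (theta_s) with one output head per task."""

    def __init__(self, feature_dim=256, old_classes=1000):
        super().__init__()
        self.feature_dim = feature_dim
        # theta_s: parameters shared across all tasks
        self.backbone = nn.Sequential(
            nn.Flatten(),
            nn.Linear(3 * 32 * 32, feature_dim),  # toy stand-in for a CNN
            nn.ReLU(),
        )
        # theta_o: old-task head whose responses should be preserved
        self.old_head = nn.Linear(feature_dim, old_classes)
        # theta_n: new-task heads, appended as tasks arrive
        self.new_heads = nn.ModuleList()

    def add_task(self, num_classes):
        self.new_heads.append(nn.Linear(self.feature_dim, num_classes))

    def forward(self, x):
        h = self.backbone(x)
        # Old-task logits plus one set of logits per new task
        return self.old_head(h), [head(h) for head in self.new_heads]
```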

Stepwise Process:

  • Recording Old Outputs: Compute $Y_o = f(X_n; \theta_s, \theta_o)$, i.e., obtain old-task predictions on the new-task inputs.
  • Joint Loss Function: Optimize all parameters to minimize

$$L_{\text{total}} = \lambda_o \, \mathcal{L}_{\text{old}}(Y_o, \hat{Y}_o) + \mathcal{L}_{\text{new}}(Y_n, \hat{Y}_n) + \mathcal{R}(\theta)$$

where $\mathcal{L}_{\text{old}}$ is a distillation (softened cross-entropy) loss:

$$\mathcal{L}_{\text{old}}(Y_o, \hat{Y}_o) = -\sum_i y'^{(i)}_o \log \hat{y}'^{(i)}_o$$

with a softmax “temperature” $T$ applied so that:

$$y'^{(i)}_o = \frac{\bigl(y^{(i)}_o\bigr)^{1/T}}{\sum_j \bigl(y^{(j)}_o\bigr)^{1/T}}$$

(and similarly for $\hat{y}'^{(i)}_o$).

  • $\mathcal{L}_{\text{new}}$ is a standard cross-entropy (classification) loss for the new task.
  • $\lambda_o$ balances the old- and new-task objectives; $\mathcal{R}(\theta)$ is a weight-decay regularizer.
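Taken together, these terms admit a short PyTorch rendering (a sketch, not the authors' code; function and variable names are ours, and the weight-decay term $\mathcal{R}(\theta)$ is delegated to the optimizer, as is conventional):

```python
import torch
import torch.nn.functional as F

def temperature_scale(probs, T=2.0):
    """The y' transform: raise probabilities to 1/T and renormalize."""
    p = probs ** (1.0 / T)
    return p / p.sum(dim=1, keepdim=True)

def lwf_loss(new_logits, new_labels, old_logits, recorded_old_probs,
             lambda_o=1.0, T=2.0):
    """Joint LwF objective: distillation on old outputs plus CE on the new task.

    recorded_old_probs is Y_o, the pretrained model's softmax output on
    the new-task inputs, recorded once before training starts.
    """
    # L_new: standard cross-entropy on the new task
    loss_new = F.cross_entropy(new_logits, new_labels)

    # L_old: softened cross-entropy between recorded and current
    # old-task responses, both smoothed with temperature T
    y_prime = temperature_scale(recorded_old_probs, T)            # targets
    y_hat_prime = temperature_scale(F.softmax(old_logits, dim=1), T)
    loss_old = -(y_prime * torch.log(y_hat_prime + 1e-8)).sum(dim=1).mean()

    return lambda_o * loss_old + loss_new
```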

Training Stages:

  • Warm-up: Freeze shared and old-task parameters; train only new task–specific parameters.
  • Joint Optimization: Unfreeze all parameters; jointly optimize to balance learning new tasks and preserving previous output behavior.
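A sketch of the full procedure, reusing the LwFNet and lwf_loss pieces above (epoch counts and learning rates are illustrative; it also assumes the new-task loader iterates in a fixed order so the recorded targets stay aligned with the batches):

```python
import torch

def record_old_outputs(model, new_loader):
    """Record Y_o: the old model's softmax responses on new-task inputs."""
    model.eval()
    recorded = []
    with torch.no_grad():
        for x, _ in new_loader:
            old_logits, _ = model(x)
            recorded.append(torch.softmax(old_logits, dim=1))
    return torch.cat(recorded)

def run_epochs(model, loader, recorded, opt, epochs, lambda_o):
    model.train()
    for _ in range(epochs):
        offset = 0
        for x, y in loader:  # loader must not shuffle, to keep alignment
            targets = recorded[offset:offset + x.size(0)]
            offset += x.size(0)
            old_logits, new_logits = model(x)
            loss = lwf_loss(new_logits[0], y, old_logits, targets, lambda_o)
            opt.zero_grad()
            loss.backward()
            opt.step()

def train_lwf(model, new_loader, num_new_classes, lambda_o=1.0):
    model.add_task(num_new_classes)
    y_o = record_old_outputs(model, new_loader)

    # Warm-up: freeze theta_s and theta_o; train only the new head.
    # (The distillation term is constant here, since its inputs are frozen.)
    for p in model.backbone.parameters():
        p.requires_grad = False
    for p in model.old_head.parameters():
        p.requires_grad = False
    opt = torch.optim.SGD(model.new_heads.parameters(), lr=1e-2,
                          weight_decay=5e-4)  # R(theta) as weight decay
    run_epochs(model, new_loader, y_o, opt, epochs=5, lambda_o=lambda_o)

    # Joint optimization: unfreeze everything and train at a lower rate.
    for p in model.parameters():
        p.requires_grad = True
    opt = torch.optim.SGD(model.parameters(), lr=1e-3, weight_decay=5e-4)
    run_epochs(model, new_loader, y_o, opt, epochs=20, lambda_o=lambda_o)
```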

This approach prevents the network's responses on the old tasks from drifting too far as it learns new discriminative features, without explicitly retraining on legacy samples.

3. Comparative Evaluation: LwF vs. Other Approaches

LwF is empirically contrasted with several baselines:

| Method | Data Requirement | Old-Task Retention | New-Task Adaptation | Efficiency |
| --- | --- | --- | --- | --- |
| Joint Training | All data, all tasks | Strong | Strong | Low (slow) |
| Fine-Tuning | New-task data only | Poor (catastrophic forgetting) | Strong | High |
| Feature Extraction | New-task data only | Perfect (on old tasks) | Weak | High |
| LwF | New-task data only, plus old-task outputs on new-task data | Good | Good | High |

Unlike joint retraining, LwF circumvents the need to alternate between old and new data. Compared to fine-tuning, it substantially reduces performance degradation on old tasks; compared to feature extraction, it allows the shared representation to adapt without severing ties to old knowledge.

Empirically, LwF is shown to match or surpass joint training on new tasks in some cases, to outperform simple fine-tuning in both old- and new-task accuracy, and to maintain old-task accuracy far more robustly. Training and inference costs are only marginally above those of standard fine-tuning.

4. Loss Function and Training Dynamics

The distillation component of LwF's loss ensures that even as model parameters shift to accommodate new tasks, the network's responses for legacy tasks (as probed by the new-task images) remain close to those of the pretrained model. Soft targets (with temperature smoothing, typically $T=2$) emphasize distributional information in the output logits, helping to preserve fine-grained output structure and regularizing the adaptation process.
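For intuition, applying this transform with $T=2$ to a confident prediction visibly redistributes mass to the secondary classes (a tiny self-contained example; the probability vector is made up):

```python
import torch

probs = torch.tensor([0.90, 0.08, 0.02])   # a peaked old-task prediction
smoothed = probs ** (1 / 2.0)              # raise to 1/T with T = 2
smoothed = smoothed / smoothed.sum()       # renormalize
print(smoothed)  # tensor([0.6910, 0.2060, 0.1030]): secondary classes gain weight
```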

The sequence is:

  • For each batch of new-task data, compute both new-task loss and old-task response-preservation loss.
  • Optimize with respect to all network parameters, with $\lambda_o$ controlling the stability–plasticity tradeoff.

The approach does not rely on duplicating the network, and the only task-specific parameters it introduces are the minimal output heads required for new classes.

5. Empirical Results and Benchmark Analysis

Performance was evaluated using mean Average Precision (mAP) and classification accuracy across scenarios such as PASCAL VOC object recognition, scene recognition, and fine-grained classification datasets. Baseline comparisons reveal:

  • New Task Performance: LwF matches or exceeds multitask learning on the new task, even though legacy data is unavailable.
  • Old Task Retention: Substantial preservation of old task accuracy, significantly above that of naïve fine-tuning.
  • Training Efficiency: Speed similar to fine-tuning; no old data is stored, so storage does not grow with the number of tasks.

A notable observation is that in some transfer scenarios, LwF can even regularize the model to improve generalization on the new task, suggesting that output preservation loss adds a beneficial constraint.

6. Applications and Extensions

LwF is especially applicable to sequentially upgraded vision systems, robotics, safety-critical applications, and edge devices where revisiting old data is burdensome or infeasible. Typical use cases include:

  • Incremental object or scene recognition in unified vision systems.
  • Knowledge expansion in proprietary or privacy-constrained environments.
  • Mobile deployment where model updates must avoid bloating or retraining from scratch.

Extensions to other tasks have been proposed, including semantic segmentation, object detection, reinforcement learning, and even natural language processing. The paradigm is flexible enough to be adapted to settings where the new data distribution is not representative of the old one, though output preservation in highly misaligned scenarios poses challenges.

7. Research Directions and Limitations

Open questions and avenues suggested include:

  • Application to continual and online learning with streaming data, potentially under domain shift or class imbalance.
  • Assessment of output-preserving strategies when old and new domains diverge sharply.
  • Incorporation of small unlabeled sets or memory samples as additional regularization.
  • Theoretical analysis relating the preservation of responses on new-task data to actual retention of old-task accuracy, especially when new-task data does not cover legacy classes comprehensively.

Limitations include possible degradation if new-task data does not sufficiently support the output diversity needed to constrain all old tasks. The reliance on response matching, rather than access to old data, can also render old task accuracy sensitive to the overlap between new and old data distributions.


Learning Without Forgetting adopts a response-preserving distillation regime to prevent catastrophic forgetting when training on new tasks in the absence of old-task data. By enforcing output consistency through knowledge distillation—without explicit data storage—LwF achieves a practical balance of accuracy and efficiency in incremental learning, and is foundational for subsequent advances in continual learning under strict data access constraints.
