
Continual Learning with Bayesian Model based on a Fixed Pre-trained Feature Extractor (2204.13349v1)

Published 28 Apr 2022 in cs.LG and cs.CV

Abstract: Deep learning has achieved human-level performance in various applications. However, current deep learning models suffer from catastrophic forgetting of old knowledge when learning new classes. This is particularly challenging for intelligent diagnosis systems, where training data are initially available for only a limited number of diseases; updating such a system with data of new diseases would inevitably downgrade its performance on previously learned diseases. Inspired by how human brains acquire new knowledge, we propose a Bayesian generative model for continual learning built on a fixed pre-trained feature extractor. In this model, the knowledge of each old class is compactly represented by a collection of statistical distributions, e.g. Gaussian mixture models, and is naturally kept from being forgotten during continual learning over time. Unlike existing class-incremental learning methods, the proposed approach is not sensitive to the continual learning process and can also be applied well to the data-incremental learning scenario. Experiments on multiple medical and natural image classification tasks showed that the proposed approach outperforms state-of-the-art approaches, even those that keep some images of old classes while continually learning new classes.
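
Below is a minimal illustrative sketch (not the authors' released code) of the idea described in the abstract: features come from a frozen pre-trained extractor, each class is summarized by its own Gaussian mixture model, and prediction picks the class whose mixture best explains the input. The `feature_extractor` callable, the `N_COMPONENTS` setting, and the class names are assumptions made for illustration.

```python
# A minimal sketch of continual learning with a fixed feature extractor and
# one generative model per class. This illustrates the general idea only;
# it is not the authors' implementation, and all names/settings are assumptions.
import numpy as np
from sklearn.mixture import GaussianMixture

N_COMPONENTS = 2  # assumed number of Gaussian components per class


class GMMContinualClassifier:
    def __init__(self, feature_extractor):
        # feature_extractor: a frozen callable mapping raw inputs to feature
        # vectors, e.g. a pre-trained CNN with its classification head removed.
        self.feature_extractor = feature_extractor
        self.class_models = {}  # class label -> fitted GaussianMixture

    def add_class(self, label, inputs):
        """Learn a new class from its own data only (class-incremental step)."""
        feats = self.feature_extractor(inputs)
        gmm = GaussianMixture(n_components=N_COMPONENTS, covariance_type="diag")
        gmm.fit(feats)
        self.class_models[label] = gmm  # previously learned classes are untouched

    def predict(self, inputs):
        """Assign each input to the class with the highest per-sample
        log-likelihood (a uniform class prior is assumed here)."""
        feats = self.feature_extractor(inputs)
        labels = list(self.class_models)
        scores = np.stack(
            [self.class_models[c].score_samples(feats) for c in labels], axis=1
        )
        return [labels[i] for i in scores.argmax(axis=1)]
```

Because each class is stored as a compact set of distribution parameters rather than as exemplar images, adding a new class never modifies the old ones, which is what keeps previously learned knowledge from being forgotten.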

Authors (6)
  1. Yang Yang (884 papers)
  2. Zhiying Cui (3 papers)
  3. Junjie Xu (23 papers)
  4. Changhong Zhong (3 papers)
  5. Wei-Shi Zheng (148 papers)
  6. Ruixuan Wang (36 papers)
Citations (29)
