
Class-Incremental Learning via Knowledge Amalgamation (2209.02112v1)

Published 5 Sep 2022 in cs.LG and cs.AI

Abstract: Catastrophic forgetting has been a significant problem hindering the deployment of deep learning algorithms in the continual learning setting. Numerous methods have been proposed to address catastrophic forgetting, where an agent loses its generalization power on old tasks while learning new ones. We put forward an alternative strategy to handle catastrophic forgetting with knowledge amalgamation (CFA), which learns a student network from multiple heterogeneous teacher models specializing in previous tasks and can be applied to current offline methods. The knowledge amalgamation process is carried out in a single-head manner with only a selected number of memorized samples and no annotations. The teachers and the student do not need to share the same network structure, allowing heterogeneous tasks to be adapted to a compact or sparse data representation. We compare our method with competitive baselines from different strategies, demonstrating our approach's advantages.
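
The abstract only sketches the amalgamation step, so the following is a minimal illustrative sketch of single-head distillation from multiple frozen teachers on unlabeled memory samples. The toy MLPs, the 0-2 / 3-4 class split, the random tensor standing in for the memory buffer, and the plain temperature-scaled KL loss are all assumptions for illustration, not the paper's exact CFA formulation.

```python
# Hypothetical sketch: amalgamating two frozen, heterogeneous teachers
# into one single-head student using only unlabeled memorized samples.
import torch
import torch.nn as nn
import torch.nn.functional as F

def make_net(in_dim, out_dim, hidden=64):
    # Tiny MLP; teachers and student may use different widths (heterogeneous).
    return nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(), nn.Linear(hidden, out_dim))

# Two teachers specializing in disjoint class subsets of previous tasks.
teacher_a = make_net(16, 3, hidden=128)   # covers classes 0-2 (assumed split)
teacher_b = make_net(16, 2, hidden=32)    # covers classes 3-4 (assumed split)
for t in (teacher_a, teacher_b):
    t.eval()
    for p in t.parameters():
        p.requires_grad_(False)          # teachers stay frozen

# Single-head student over the union of all classes seen so far.
student = make_net(16, 5)
opt = torch.optim.Adam(student.parameters(), lr=1e-3)
T = 2.0  # distillation temperature (illustrative value)

# Stand-in for the selected memorized samples; note: no labels are used.
memory = torch.randn(256, 16)

for step in range(100):
    x = memory[torch.randint(0, 256, (32,))]
    logits = student(x)
    with torch.no_grad():
        soft_a = F.softmax(teacher_a(x) / T, dim=1)
        soft_b = F.softmax(teacher_b(x) / T, dim=1)
    # Slice the student's single head so each teacher supervises only
    # its own class range, then match soft predictions with KL divergence.
    log_p_a = F.log_softmax(logits[:, 0:3] / T, dim=1)
    log_p_b = F.log_softmax(logits[:, 3:5] / T, dim=1)
    loss = F.kl_div(log_p_a, soft_a, reduction="batchmean") \
         + F.kl_div(log_p_b, soft_b, reduction="batchmean")
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Slicing the shared head so each teacher supervises its own class subset is one way to keep the student single-headed while still accommodating teachers of different architectures and output sizes.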

Authors (4)
  1. Marcus de Carvalho (7 papers)
  2. Mahardhika Pratama (59 papers)
  3. Jie Zhang (847 papers)
  4. Yajuan San (1 paper)
Citations (7)
