Regularize, Expand and Compress: Multi-task based Lifelong Learning via NonExpansive AutoML (1903.08362v1)

Published 20 Mar 2019 in cs.CV and cs.LG

Abstract: Lifelong learning, the problem of continual learning where tasks arrive in sequence, has lately been attracting more attention in the computer vision community. The aim of lifelong learning is to develop a system that can learn new tasks while maintaining performance on previously learned tasks. However, two obstacles stand in the way of lifelong learning with deep neural networks: catastrophic forgetting and capacity limitation. To address these issues, and inspired by recent breakthroughs in automatically learning good neural network architectures, we develop a multi-task based lifelong learning via nonexpansive AutoML framework termed Regularize, Expand and Compress (REC). REC consists of three stages: 1) continually learn the sequential tasks without access to the learned tasks' data, via a newly proposed multi-task weight consolidation (MWC) algorithm; 2) expand the network through network-transformation based AutoML to improve model capacity and performance for lifelong learning; 3) compress the expanded model after learning each new task to maintain model efficiency and performance. The proposed MWC and REC algorithms achieve superior performance over other lifelong learning algorithms on four different datasets.
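The abstract's first stage, multi-task weight consolidation (MWC), is a regularization-based defense against catastrophic forgetting. Below is a minimal, hypothetical PyTorch sketch of the EWC-style quadratic penalty that consolidation methods of this kind build on; the function names (`consolidation_penalty`, `train_new_task`) and the exact penalty form are illustrative assumptions, not the paper's implementation, since the paper's MWC additionally models cross-task parameter importance.

```python
# Hypothetical sketch of REC's stage 1 (regularize). Names and the exact
# penalty are illustrative assumptions; the paper's MWC also accounts for
# cross-task parameter importance, which this simplification omits.
import torch
import torch.nn.functional as F

def consolidation_penalty(model, old_params, importance, lam=1.0):
    """EWC-style quadratic penalty: discourage parameters that were
    important for earlier tasks from drifting while learning a new task."""
    penalty = torch.zeros(())
    for name, p in model.named_parameters():
        if name in old_params:
            penalty = penalty + (importance[name] * (p - old_params[name]).pow(2)).sum()
    return lam * penalty

def train_new_task(model, loader, optimizer, old_params, importance, lam=1.0):
    """One epoch on a new task, without replaying old tasks' data."""
    for x, y in loader:
        optimizer.zero_grad()
        loss = F.cross_entropy(model(x), y)
        if old_params:  # stage 1: regularize against forgetting
            loss = loss + consolidation_penalty(model, old_params, importance, lam)
        loss.backward()
        optimizer.step()
```

Stages 2 and 3 would wrap this loop: after each task, an AutoML controller proposes a network-transformation based expansion, and the expanded model is then compressed (e.g., by pruning or distillation) back toward its original footprint, which is what keeps the framework "nonexpansive" over a long task sequence.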

Authors (7)
  1. Jie Zhang (846 papers)
  2. Junting Zhang (11 papers)
  3. Shalini Ghosh (34 papers)
  4. Dawei Li (75 papers)
  5. Jingwen Zhu (16 papers)
  6. Heming Zhang (13 papers)
  7. Yalin Wang (72 papers)
Citations (2)