Continual Learning Using Multi-view Task Conditional Neural Networks (2005.05080v3)

Published 8 May 2020 in cs.LG and stat.ML

Abstract: Conventional deep learning models have limited capacity for learning multiple tasks sequentially. The issue of forgetting previously learned tasks in continual learning is known as catastrophic forgetting or interference. When the input data or the goal of learning changes, a continual model will learn and adapt to the new status. However, the model will not remember or recognise any revisits to the previous states. This causes performance reduction and repeated re-training when dealing with periodic or irregularly reoccurring changes in the data or goals. The changes in goals or data are referred to as new tasks in a continual learning model. Most continual learning methods have a task-known setup in which the task identities are known in advance to the learning model. We propose Multi-view Task Conditional Neural Networks (Mv-TCNN), which do not require the reoccurring tasks to be known in advance. We evaluate our model on standard datasets using MNIST, CIFAR10, CIFAR100, and also a real-world dataset collected in a remote healthcare monitoring study (the TIHM dataset). The proposed model outperforms state-of-the-art solutions in continual learning and in adapting to new tasks that are not defined in advance.
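
For readers unfamiliar with the general setup the abstract describes, the sketch below illustrates one common way a task-conditional network can operate without task identities at test time: a shared backbone with per-task heads, where the head is selected by predictive confidence. This is a minimal, hypothetical illustration of the task-free idea; the class names, architecture, and entropy-based task inference are assumptions for exposition and are not the paper's actual Mv-TCNN design.

```python
# Hypothetical sketch: shared backbone + per-task heads, with task-free
# inference by picking the most confident (lowest-entropy) head.
# Not the authors' Mv-TCNN; all choices here are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TaskConditionalNet(nn.Module):
    def __init__(self, in_dim=784, hidden=256, n_classes=10):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.hidden = hidden
        self.n_classes = n_classes
        self.heads = nn.ModuleList()  # one classification head per task

    def add_task(self):
        """Attach a new head when a new task is declared or detected."""
        self.heads.append(nn.Linear(self.hidden, self.n_classes))

    def forward(self, x, task_id=None):
        z = self.backbone(x)
        if task_id is not None:
            # Task-known setting: use the head for the given task.
            return self.heads[task_id](z)
        # Task-free setting: evaluate all heads and keep, per sample,
        # the head whose prediction has the lowest entropy.
        logits = torch.stack([h(z) for h in self.heads], dim=0)      # (T, B, C)
        probs = F.softmax(logits, dim=-1)
        entropy = -(probs * probs.clamp_min(1e-8).log()).sum(dim=-1)  # (T, B)
        best = entropy.argmin(dim=0)                                  # (B,)
        return logits[best, torch.arange(x.size(0))]                  # (B, C)
```

Usage would follow the sequential-task protocol the abstract mentions: call add_task() when a new task begins, train that head on the new data, and at evaluation time call the model without a task_id so the confidence-based selection handles reoccurring tasks.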

Authors (4)
  1. Honglin Li (32 papers)
  2. Payam Barnaghi (31 papers)
  3. Shirin Enshaeifar (7 papers)
  4. Frieder Ganz (6 papers)
Citations (1)
