Online Continual Learning on Class Incremental Blurry Task Configuration with Anytime Inference (2110.10031v2)

Published 19 Oct 2021 in cs.LG and cs.CV

Abstract: Despite rapid advances in continual learning, a large body of research is devoted to improving performance in the existing setups. While a handful of works do propose new continual learning setups, they still lack practicality in certain aspects. For better practicality, we first propose a novel continual learning setup that is online, task-free, class-incremental, of blurry task boundaries and subject to inference queries at any moment. We additionally propose a new metric to better measure the performance of continual learning methods subject to inference queries at any moment. To address the challenging setup and evaluation protocol, we propose an effective method that employs a new memory management scheme and novel learning techniques. Our empirical validation demonstrates that the proposed method outperforms prior art by large margins. Code and data splits are available at https://github.com/naver-ai/i-Blurry.
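The setup described in the abstract can be made concrete with a toy simulation: samples arrive one at a time from a task-free stream, the learner updates continuously with a replay memory, and inference queries may arrive at any step. The sketch below is illustrative only and assumes simple stand-ins (reservoir sampling for the memory, a nearest-class-mean learner, and averaging accuracy over query points as an anytime-inference score); it is not the paper's actual memory-management scheme, model, or metric.

```python
import random
from collections import defaultdict

class ReservoirMemory:
    """Fixed-size replay memory via reservoir sampling -- a common baseline,
    used here as a stand-in for the paper's memory management scheme."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.data = []
        self.seen = 0

    def update(self, sample):
        self.seen += 1
        if len(self.data) < self.capacity:
            self.data.append(sample)
        else:
            j = random.randrange(self.seen)
            if j < self.capacity:
                self.data[j] = sample

class NearestClassMean:
    """Toy learner: keeps a running per-class mean of 1-D features and
    predicts the class whose mean is closest to the query."""
    def __init__(self):
        self.sums = defaultdict(float)
        self.counts = defaultdict(int)

    def update(self, x, y):
        self.sums[y] += x
        self.counts[y] += 1

    def predict(self, x):
        if not self.counts:
            return None
        return min(self.counts,
                   key=lambda c: abs(x - self.sums[c] / self.counts[c]))

def run_stream(stream, queries, memory_size=20, replay_k=4):
    """Online, task-free loop: one sample at a time, no task boundaries,
    and an inference query may be answered at any step.  Returns accuracy
    averaged over all query points (an anytime-evaluation style score)."""
    model, memory = NearestClassMean(), ReservoirMemory(memory_size)
    correct = []
    for t, (x, y) in enumerate(stream):
        model.update(x, y)            # learn from the incoming sample
        memory.update((x, y))         # store it in the replay memory
        for mx, my in random.sample(memory.data,
                                    min(replay_k, len(memory.data))):
            model.update(mx, my)      # replay a few stored samples
        if t in queries:              # inference query at this moment
            qx, qy = queries[t]
            correct.append(model.predict(qx) == qy)
    return sum(correct) / len(correct)
```

A quick usage example: a stream mixing two well-separated classes (features near 0 and near 10), with queries issued early, mid-stream, and late; the returned score is the fraction of queries answered correctly.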

Authors (4)
  1. Hyunseo Koh (4 papers)
  2. Dahyun Kim (21 papers)
  3. Jung-Woo Ha (67 papers)
  4. Jonghyun Choi (50 papers)
Citations (56)