Online Continual Learning on a Contaminated Data Stream with Blurry Task Boundaries (2203.15355v2)

Published 29 Mar 2022 in cs.CV, cs.AI, and cs.LG

Abstract: Learning under a continuously changing data distribution with incorrect labels is a practical yet challenging real-world problem. A large body of continual learning (CL) methods, however, assumes data streams with clean labels, and online learning scenarios under noisy data streams remain underexplored. We consider a more practical CL setup: online learning from a blurry data stream with corrupted labels, where existing CL methods struggle. To address this task, we first argue for the importance of both the diversity and the purity of the examples in a continual learning model's episodic memory. To balance diversity and purity in the episodic memory, we propose a novel strategy to manage and use the memory through a unified approach of label-noise-aware diverse sampling and robust, semi-supervised learning. Empirical validation on four datasets with real-world or synthetic label noise (CIFAR-10, CIFAR-100, mini-WebVision, and Food-101N) shows that our method significantly outperforms prior art in this realistic and challenging continual learning scenario. Code and data splits are available at https://github.com/clovaai/puridiver.
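The abstract's central idea, balancing diversity and purity in an episodic replay memory, can be illustrated with a minimal, hypothetical sketch. This is not the paper's actual PuriDivER algorithm; it only illustrates the two criteria: eviction targets the most populous class (diversity), and within that class drops the highest-loss example, using loss as a rough proxy for label noise (purity). The class name `EpisodicMemory` and the `(x, y, loss)` tuple layout are assumptions for illustration.

```python
class EpisodicMemory:
    """Toy replay buffer balancing diversity and purity (illustrative only).

    Each stored item is a tuple (x, y, loss): the example, its (possibly
    noisy) label, and the model's current loss on it. High loss is used
    here as a crude proxy for a corrupted label.
    """

    def __init__(self, capacity):
        self.capacity = capacity
        self.buffer = []  # list of (x, y, loss)

    def add(self, x, y, loss):
        self.buffer.append((x, y, loss))
        if len(self.buffer) > self.capacity:
            self._evict()

    def _evict(self):
        # Diversity: evict from the most populous class so that rare
        # classes keep their slots.
        counts = {}
        for _, y, _ in self.buffer:
            counts[y] = counts.get(y, 0) + 1
        majority = max(counts, key=counts.get)
        # Purity: within that class, drop the highest-loss (most likely
        # noisy-labeled) example.
        idx = max(
            (i for i, (_, y, _) in enumerate(self.buffer) if y == majority),
            key=lambda i: self.buffer[i][2],
        )
        self.buffer.pop(idx)
```

A short usage example: after the buffer overflows, the high-loss example of the over-represented class is the one evicted, leaving a smaller but cleaner and class-balanced memory.

```python
mem = EpisodicMemory(capacity=3)
mem.add("a1", 0, 0.1)
mem.add("a2", 0, 5.0)  # same class, very high loss: likely noisy
mem.add("b1", 1, 0.2)
mem.add("a3", 0, 0.3)  # overflow triggers eviction of "a2"
```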

Authors (6)
  1. Jihwan Bang (14 papers)
  2. Hyunseo Koh (4 papers)
  3. Seulki Park (7 papers)
  4. Hwanjun Song (44 papers)
  5. Jung-Woo Ha (67 papers)
  6. Jonghyun Choi (50 papers)
Citations (32)