vCLIMB: A Novel Video Class Incremental Learning Benchmark (2201.09381v2)

Published 23 Jan 2022 in cs.CV

Abstract: Continual learning (CL) is under-explored in the video domain. The few existing works contain splits with imbalanced class distributions over the tasks, or study the problem on unsuitable datasets. We introduce vCLIMB, a novel video continual learning benchmark. vCLIMB is a standardized test-bed to analyze catastrophic forgetting of deep models in video continual learning. In contrast to previous work, we focus on class-incremental continual learning with models trained on a sequence of disjoint tasks, and distribute the number of classes uniformly across the tasks. We perform in-depth evaluations of existing CL methods in vCLIMB and observe two challenges unique to video data. First, the selection of instances to store in episodic memory is performed at the frame level. Second, untrimmed training data influences the effectiveness of frame sampling strategies. We address these two challenges by proposing a temporal consistency regularization that can be applied on top of memory-based continual learning methods. Our approach significantly improves the baseline by up to 24% on the untrimmed continual learning task.
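
The temporal consistency regularization mentioned in the abstract amounts to penalizing feature drift between the frames kept in episodic memory for a replayed video, on top of the usual replay loss of a memory-based class-incremental method. The sketch below is a minimal illustration of that idea, not the authors' implementation: the `frame_encoder` hook, the tensor shapes, and the weighting factor `lam` are assumptions made for the example.

```python
import torch
import torch.nn.functional as F

def temporal_consistency_loss(frame_feats: torch.Tensor) -> torch.Tensor:
    # frame_feats: (batch, num_frames, feat_dim) embeddings of the frames
    # stored in episodic memory for each replayed video (assumed layout).
    # Penalize squared differences between temporally adjacent frames.
    diffs = frame_feats[:, 1:] - frame_feats[:, :-1]
    return diffs.pow(2).mean()

def replay_training_step(model, clips, labels, mem_clips, mem_labels, lam=0.1):
    """One step of a memory-based class-incremental method with the
    temporal consistency term added on top (illustrative sketch only)."""
    # Classification loss on the current task's clips.
    loss = F.cross_entropy(model(clips), labels)

    # Replay loss on clips drawn from episodic memory.
    loss = loss + F.cross_entropy(model(mem_clips), mem_labels)

    # Hypothetical per-frame feature hook; the real model/API may differ.
    mem_frame_feats = model.frame_encoder(mem_clips)
    loss = loss + lam * temporal_consistency_loss(mem_frame_feats)
    return loss
```

The regularizer matters precisely because memory stores only a subset of frames per video, which is the frame-level selection challenge the abstract highlights.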

Authors (6)
  1. Andrés Villa (9 papers)
  2. Kumail Alhamoud (8 papers)
  3. Juan León Alcázar (13 papers)
  4. Fabian Caba Heilbron (34 papers)
  5. Victor Escorcia (13 papers)
  6. Bernard Ghanem (256 papers)
Citations (26)
