3rd Continual Learning Workshop Challenge on Egocentric Category and Instance Level Object Understanding (2212.06833v1)

Published 13 Dec 2022 in cs.CV, cs.AI, and cs.LG

Abstract: Continual Learning, also known as Lifelong or Incremental Learning, has recently gained renewed interest among the Artificial Intelligence research community. Recent research efforts have quickly led to the design of novel algorithms able to reduce the impact of the catastrophic forgetting phenomenon in deep neural networks. Due to this surge of interest in the field, many competitions have been held in recent years, as they are an excellent opportunity to stimulate research in promising directions. This paper summarizes the ideas, design choices, rules, and results of the challenge held at the 3rd Continual Learning in Computer Vision (CLVision) Workshop at CVPR 2022. The focus of this competition is the complex continual object detection task, which is still underexplored in the literature compared to classification tasks. The challenge is based on the challenge version of the novel EgoObjects dataset, a large-scale egocentric object dataset explicitly designed to benchmark continual learning algorithms for egocentric category-/instance-level object understanding, which covers more than 1k unique main objects and 250+ categories in around 100k video frames.
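
To illustrate the setting the abstract describes, the sketch below shows naive sequential fine-tuning of an off-the-shelf detector over a stream of "experiences", the baseline behaviour (catastrophic forgetting of earlier categories) that continual object detection strategies aim to mitigate. This is not from the paper: it assumes a standard PyTorch/torchvision environment, and the experience splits, category counts, and dummy data are placeholders that do not reproduce the EgoObjects challenge protocol or its official tooling.

```python
# Minimal sketch: naive sequential fine-tuning for continual object detection.
# Placeholder experiences and dummy data; not the EgoObjects challenge pipeline.
import torch
import torchvision

NUM_CLASSES = 1 + 9            # background + 9 placeholder categories (assumption)
EXPERIENCES = 3                # stream of 3 experiences, each adding new categories
IMAGES_PER_EXPERIENCE = 4      # tiny dummy batches, just to keep the sketch runnable

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(
    weights=None, num_classes=NUM_CLASSES
)
optimizer = torch.optim.SGD(model.parameters(), lr=0.005, momentum=0.9)
model.train()


def dummy_experience(exp_id):
    """Yield (image, target) pairs whose labels come from the category slice
    introduced by this experience (class-incremental style)."""
    first_cat = 1 + exp_id * 3          # categories 1-3, 4-6, 7-9 per experience
    for _ in range(IMAGES_PER_EXPERIENCE):
        image = torch.rand(3, 224, 224)
        boxes = torch.tensor([[20.0, 20.0, 120.0, 120.0]])
        labels = torch.randint(first_cat, first_cat + 3, (1,))
        yield image, {"boxes": boxes, "labels": labels}


for exp_id in range(EXPERIENCES):
    # Plain sequential fine-tuning: no replay or regularization, so categories
    # from earlier experiences are expected to be forgotten -- the failure mode
    # continual-learning strategies are designed to counter.
    for image, target in dummy_experience(exp_id):
        losses = model([image], [target])   # training mode returns a loss dict
        loss = sum(losses.values())
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    print(f"experience {exp_id}: last loss {loss.item():.3f}")
```

In the challenge setting, per-experience training like this would be followed by evaluation on all categories or instances seen so far, so that forgetting of earlier experiences is measured explicitly.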

Authors (10)
  1. Lorenzo Pellegrini (18 papers)
  2. Chenchen Zhu (26 papers)
  3. Fanyi Xiao (25 papers)
  4. Zhicheng Yan (26 papers)
  5. Antonio Carta (29 papers)
  6. Matthias De Lange (12 papers)
  7. Vincenzo Lomonaco (58 papers)
  8. Roshan Sumbaly (9 papers)
  9. David Vazquez (73 papers)
  10. Pau Rodriguez (35 papers)
Citations (6)
