Multi-Task Incremental Learning for Object Detection (2002.05347v3)

Published 13 Feb 2020 in cs.CV

Abstract: Multi-task learning learns multiple tasks while sharing knowledge and computation among them. However, it suffers from catastrophic forgetting of previous knowledge when trained incrementally without access to the old data. Most existing object detectors are domain-specific and static, while some are learned incrementally but only within a single domain. Training an object detector incrementally across various domains has rarely been explored. In this work, we propose three incremental learning scenarios across various domains and categories for object detection. To mitigate catastrophic forgetting, we propose attentive feature distillation, which leverages both bottom-up and top-down attention to extract important information for distillation. We then systematically analyze the proposed distillation method in the different scenarios. We find that, contrary to common understanding, domain gaps have a smaller negative impact on incremental detection, while category differences are problematic. For the difficult cases, where the domain gaps and especially the category differences are large, we explore three different exemplar sampling methods and show that the proposed adaptive sampling method is effective at selecting diverse and informative samples from entire datasets to further prevent forgetting. Experimental results show significant improvements in three different scenarios across seven object detection benchmark datasets.
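The abstract describes attention-weighted feature distillation between the old (teacher) and new (student) detectors. Below is a minimal PyTorch sketch of that general idea, assuming bottom-up attention is derived from the teacher's spatial activation magnitudes and top-down attention from its per-channel pooled responses; the paper's exact attention definitions and loss weighting may differ.

```python
import torch

def attentive_feature_distillation_loss(student_feat, teacher_feat, eps=1e-6):
    """Illustrative attention-weighted feature distillation loss.

    student_feat, teacher_feat: (N, C, H, W) backbone feature maps from the
    new and old detectors at the same stage. The attention definitions here
    are assumptions for this sketch, not the paper's exact formulation.
    """
    # Bottom-up spatial attention: mean absolute teacher activation over
    # channels, normalized over spatial positions.
    spatial = teacher_feat.abs().mean(dim=1, keepdim=True)            # (N, 1, H, W)
    spatial = spatial / (spatial.sum(dim=(2, 3), keepdim=True) + eps)

    # Top-down channel attention: globally pooled teacher response,
    # normalized across channels.
    channel = teacher_feat.abs().mean(dim=(2, 3), keepdim=True)       # (N, C, 1, 1)
    channel = channel / (channel.sum(dim=1, keepdim=True) + eps)

    # Attention-weighted L2 distillation between student and teacher features.
    diff = (student_feat - teacher_feat) ** 2
    return (diff * spatial * channel).sum(dim=(1, 2, 3)).mean()

# Usage sketch: add the distillation term to the detection loss when
# training the new detector, so important old-task features are preserved.
s = torch.randn(2, 256, 32, 32, requires_grad=True)
t = torch.randn(2, 256, 32, 32)
loss = attentive_feature_distillation_loss(s, t)
loss.backward()
```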

Authors (5)
  1. Xialei Liu (35 papers)
  2. Hao Yang (328 papers)
  3. Avinash Ravichandran (35 papers)
  4. Rahul Bhotika (13 papers)
  5. Stefano Soatto (179 papers)
Citations (12)
