
SID: Incremental Learning for Anchor-Free Object Detection via Selective and Inter-Related Distillation (2012.15439v1)

Published 31 Dec 2020 in cs.CV

Abstract: Incremental learning requires a model to continually learn new tasks from streaming data. However, traditionally fine-tuning a well-trained deep neural network on a new task dramatically degrades performance on the old task -- a problem known as catastrophic forgetting. In this paper, we address this issue in the context of anchor-free object detection, a recent trend in computer vision that is simple, fast, and flexible. Simply adapting current incremental learning strategies fails on these anchor-free detectors because their specific model structures are not taken into account. To deal with the challenges of incremental learning on anchor-free object detectors, we propose a novel incremental learning paradigm called Selective and Inter-related Distillation (SID). In addition, we propose a novel evaluation metric to better assess the performance of detectors under incremental learning conditions. By selectively distilling at the proper locations and further transferring additional instance-relation knowledge, our method demonstrates significant advantages on the benchmark datasets PASCAL VOC and COCO.
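The two ingredients named in the abstract -- distillation restricted to selected locations, and transfer of instance-relation knowledge -- can be sketched as two losses. This is a hypothetical, minimal NumPy illustration of the general idea; the function names, the L2/L1 choices, and the binary selection mask are assumptions for illustration and are not the paper's exact SID formulation.

```python
import numpy as np

def selective_distill_loss(teacher_feat, student_feat, mask):
    """L2 distillation between teacher and student feature maps,
    restricted to selected spatial locations (mask == 1)."""
    diff = (teacher_feat - student_feat) ** 2
    return float((diff * mask).sum() / max(mask.sum(), 1.0))

def inter_relation_loss(teacher_inst, student_inst):
    """Instance-relation transfer: match the pairwise-distance
    matrices of teacher and student instance embeddings, so the
    *relations* between instances are preserved, not just features."""
    def pdist(x):
        d = x[:, None, :] - x[None, :, :]        # (N, N, D) differences
        return np.sqrt((d ** 2).sum(axis=-1))    # (N, N) distances
    return float(np.abs(pdist(teacher_inst) - pdist(student_inst)).mean())

# Toy check: identical teacher/student features and embeddings give zero loss.
rng = np.random.default_rng(0)
feat = rng.normal(size=(8, 8))
mask = np.ones((8, 8))
inst = rng.normal(size=(5, 16))
assert selective_distill_loss(feat, feat.copy(), mask) == 0.0
assert inter_relation_loss(inst, inst.copy()) == 0.0
```

In practice these two terms would be weighted and added to the detector's regular training loss on the new task, so the student keeps old-task behavior while learning new classes.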

Authors (5)
  1. Can Peng (14 papers)
  2. Kun Zhao (97 papers)
  3. Sam Maksoud (5 papers)
  4. Meng Li (244 papers)
  5. Brian C. Lovell (41 papers)
Citations (35)
