
Classification Calibration for Long-tail Instance Segmentation (1910.13081v3)

Published 29 Oct 2019 in cs.CV

Abstract: Remarkable progress has been made in object instance detection and segmentation in recent years. However, existing state-of-the-art methods are mostly evaluated with fairly balanced and class-limited benchmarks, such as Microsoft COCO dataset [8]. In this report, we investigate the performance drop phenomenon of state-of-the-art two-stage instance segmentation models when processing extreme long-tail training data based on the LVIS [5] dataset, and find a major cause is the inaccurate classification of object proposals. Based on this observation, we propose to calibrate the prediction of classification head to improve recognition performance for the tail classes. Without much additional cost and modification of the detection model architecture, our calibration method improves the performance of the baseline by a large margin on the tail classes. Codes will be available. Importantly, after the submission, we find significant improvement can be further achieved by modifying the calibration head, which we will update later.
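The abstract does not spell out how the classification head is calibrated. As a rough, non-authoritative illustration of the general idea of rebalancing classification scores toward tail classes, the sketch below applies a frequency-based logit adjustment; the function name, the `tau` knob, and the adjustment rule are assumptions for illustration, not the authors' calibration head.

```python
# Hypothetical sketch of frequency-based calibration of classification logits.
# NOT the paper's method: the abstract does not specify the calibration scheme.
import torch

def calibrate_logits(logits: torch.Tensor,
                     class_counts: torch.Tensor,
                     tau: float = 1.0) -> torch.Tensor:
    """Subtract a log-prior term so rare (tail) classes are not suppressed.

    logits:        (num_proposals, num_classes) raw classification scores.
    class_counts:  (num_classes,) training-set instance counts per class.
    tau:           strength of the adjustment (hypothetical knob).
    """
    prior = class_counts.float() / class_counts.sum()
    # Tail classes have a small prior, so subtracting tau * log(prior)
    # raises their calibrated scores relative to frequent head classes.
    return logits - tau * torch.log(prior + 1e-12)

# Toy usage: 3 proposals, 4 classes with one dominant head class.
logits = torch.randn(3, 4)
counts = torch.tensor([10000, 500, 50, 5])
probs = torch.softmax(calibrate_logits(logits, counts), dim=-1)
```

This kind of post-hoc rescaling leaves the detector architecture untouched and adds negligible cost, which matches the spirit of the calibration described in the abstract, even though the paper's actual calibration head may differ.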

Authors (8)
  1. Tao Wang (700 papers)
  2. Yu Li (378 papers)
  3. Bingyi Kang (39 papers)
  4. Junnan Li (56 papers)
  5. Jun Hao Liew (29 papers)
  6. Sheng Tang (18 papers)
  7. Steven Hoi (38 papers)
  8. Jiashi Feng (295 papers)
Citations (13)
