Overcoming Classifier Imbalance for Long-tail Object Detection with Balanced Group Softmax (2006.10408v1)

Published 18 Jun 2020 in cs.CV, cs.LG, and stat.ML

Abstract: Solving long-tail large vocabulary object detection with deep learning based models is a challenging and demanding task, which is however under-explored. In this work, we provide the first systematic analysis on the underperformance of state-of-the-art models in front of long-tail distribution. We find existing detection methods are unable to model few-shot classes when the dataset is extremely skewed, which can result in classifier imbalance in terms of parameter magnitude. Directly adapting long-tail classification models to detection frameworks can not solve this problem due to the intrinsic difference between detection and classification. In this work, we propose a novel balanced group softmax (BAGS) module for balancing the classifiers within the detection frameworks through group-wise training. It implicitly modulates the training process for the head and tail classes and ensures they are both sufficiently trained, without requiring any extra sampling for the instances from the tail classes. Extensive experiments on the very recent long-tail large vocabulary object recognition benchmark LVIS show that our proposed BAGS significantly improves the performance of detectors with various backbones and frameworks on both object detection and instance segmentation. It beats all state-of-the-art methods transferred from long-tail image classification and establishes new state-of-the-art. Code is available at https://github.com/FishYuLi/BalancedGroupSoftmax.

Authors (7)
  1. Yu Li (378 papers)
  2. Tao Wang (700 papers)
  3. Bingyi Kang (39 papers)
  4. Sheng Tang (18 papers)
  5. Chunfeng Wang (6 papers)
  6. Jintao Li (44 papers)
  7. Jiashi Feng (295 papers)
Citations (251)

Summary

Balanced Group Softmax for Long-Tail Object Detection

The paper "Overcoming Classifier Imbalance for Long-tail Object Detection with Balanced Group Softmax" addresses the performance degradation of object detection frameworks in dealing with long-tail distributions. This is a prevalent issue where few classes dominate the dataset, resulting in these classes having a disproportionately high number of instances, whereas tail classes have significantly fewer instances.

Analysis of the Problem

Current models struggle with the imbalance inherent in long-tail datasets such as LVIS, where the number of instances per category varies drastically. The paper attributes this underperformance primarily to imbalanced classifier weights within the detection framework: methods transferred from conventional settings fail to model the few-shot classes that dominate such distributions, and the resulting disparity in classifier weight magnitude biases the model toward well-represented classes, degrading detection and segmentation performance on rare ones.
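The weight-magnitude imbalance described above can be inspected directly from a trained detector's classification head. Below is a minimal sketch, assuming a standard linear classification layer whose rows are per-class weight vectors; the tensor shapes and the helper name classifier_weight_norms are illustrative, not taken from the paper's code.

```python
import torch

def classifier_weight_norms(fc_cls_weight: torch.Tensor) -> torch.Tensor:
    # Each row of the weight matrix is one class's classifier vector;
    # its L2 norm is the "parameter magnitude" referred to above.
    return fc_cls_weight.norm(p=2, dim=1)

# Illustrative stand-in for a trained head: 1230 LVIS classes + background.
weights = torch.randn(1231, 1024)
norms = classifier_weight_norms(weights)
# Sorting classes by training-instance count and plotting `norms` in that
# order makes the head-vs-tail gap in weight magnitude visible.
```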

Proposed Solution: Balanced Group Softmax

To tackle this, the paper proposes the Balanced Group Softmax (BAGS) module, which integrates readily into existing detection frameworks. The core idea is to partition categories into groups according to their training instance counts, so that categories with few instances are not overwhelmed by those with many during classifier weight updates. Softmax and its loss are then computed within each group independently, which keeps head and tail classifiers balanced; a minimal sketch of this grouping and the per-group loss is given below.
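The following is a hedged sketch of the group-wise training objective, assuming classes are binned by training-instance count and approximating each group's extra "others" slot with a fixed zero logit (the paper instead learns a dedicated "others" classifier per group and treats background as its own group). The names build_groups and group_softmax_loss are illustrative, not from the released code.

```python
import torch
import torch.nn.functional as F

def build_groups(instance_counts, bins=(0, 10, 100, 1000, float("inf"))):
    # Partition class indices into groups by training-instance count,
    # e.g. [0, 10), [10, 100), [100, 1000), [1000, inf).
    groups = []
    for lo, hi in zip(bins[:-1], bins[1:]):
        groups.append([c for c, n in enumerate(instance_counts) if lo <= n < hi])
    return groups

def group_softmax_loss(logits, labels, groups):
    # Cross-entropy computed independently inside each group. Every group
    # gets an extra "others" slot so proposals whose ground-truth class
    # lies outside the group still have a valid target. Here the "others"
    # logit is a fixed zero column -- a simplification of the paper's
    # learned per-group "others" classifier.
    total = logits.new_zeros(())
    for cls_ids in groups:
        if not cls_ids:
            continue
        group_logits = logits[:, cls_ids]
        others = logits.new_zeros(logits.size(0), 1)
        group_logits = torch.cat([group_logits, others], dim=1)
        target = torch.full_like(labels, len(cls_ids))  # default: "others" slot
        for j, c in enumerate(cls_ids):
            target[labels == c] = j
        total = total + F.cross_entropy(group_logits, target)
    return total

# Toy usage: 8 classes with skewed counts, a batch of 4 proposals.
counts = [5, 3, 50, 40, 500, 700, 2000, 9000]
groups = build_groups(counts)
logits = torch.randn(4, len(counts))
labels = torch.tensor([0, 2, 6, 7])
loss = group_softmax_loss(logits, labels, groups)
```

Because each cross-entropy term normalizes only over classes with comparable instance counts, frequent classes no longer suppress the logits of rare classes during training, which is the balancing effect the module is designed to achieve.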

Numerical Results and Validation

Extensive experiments were conducted on the LVIS dataset, a large-scale benchmark with a pronounced long-tail distribution across 1230 categories. The proposed BAGS module brought significant improvements in both object detection and instance segmentation, with notable gains in mean Average Precision (mAP) across all categories and especially for rare and uncommon categories: roughly a 9% to 19% improvement on tail classes and approximately a 3% to 6% increase in overall mAP, outperforming state-of-the-art methods transferred from long-tail classification paradigms.

Implications and Future Directions

This research holds significant implications for the deployment of deep learning models in real-world applications where data are inherently imbalanced. By facilitating a more nuanced approach to object detection that accounts for the skewed distribution of instances, this paper contributes a practical module to improve the applicability and robustness of detection frameworks.

Future work can explore the scalability of BAGS within more complex detection frameworks, or its integration with dynamic receptive field models to handle unseen data. Research could also optimize the hyperparameters and group delineations of the BAGS module, potentially further enhancing its efficacy across datasets with diverse imbalance characteristics.

In summary, the balanced group softmax represents a substantial methodological advancement in long-tail object detection and segmentation contexts. It strategically addresses classifier imbalance without resorting to resource-intensive data re-sampling or intricate cost-sensitive learning strategies, thus providing a robust solution in scenarios characterized by long-tailed data distributions.