Oriented Object Detection in Aerial Images with Box Boundary-Aware Vectors (2008.07043v2)

Published 17 Aug 2020 in cs.CV

Abstract: Oriented object detection in aerial images is a challenging task as the objects in aerial images are displayed in arbitrary directions and are usually densely packed. Current oriented object detection methods mainly rely on two-stage anchor-based detectors. However, the anchor-based detectors typically suffer from a severe imbalance issue between the positive and negative anchor boxes. To address this issue, in this work we extend the horizontal keypoint-based object detector to the oriented object detection task. In particular, we first detect the center keypoints of the objects, based on which we then regress the box boundary-aware vectors (BBAVectors) to capture the oriented bounding boxes. The box boundary-aware vectors are distributed in the four quadrants of a Cartesian coordinate system for all arbitrarily oriented objects. To relieve the difficulty of learning the vectors in the corner cases, we further classify the oriented bounding boxes into horizontal and rotational bounding boxes. In the experiment, we show that learning the box boundary-aware vectors is superior to directly predicting the width, height, and angle of an oriented bounding box, as adopted in the baseline method. Besides, the proposed method competes favorably with state-of-the-art methods. Code is available at https://github.com/yijingru/BBAVectors-Oriented-Object-Detection.

Authors (6)
  1. Jingru Yi (14 papers)
  2. Pengxiang Wu (21 papers)
  3. Bo Liu (484 papers)
  4. Qiaoying Huang (14 papers)
  5. Hui Qu (19 papers)
  6. Dimitris Metaxas (85 papers)
Citations (221)

Summary

  • The paper introduces an anchor-free, single-stage detection method that leverages Box Boundary-Aware Vectors to improve robustness in detecting arbitrarily oriented objects.
  • It employs a U-Net-like backbone with ResNet101, generating heatmaps and offsets to precisely localize object centers in aerial imagery.
  • Experimental evaluations on DOTA and HRSC2016 datasets demonstrate significantly higher mAP and real-time performance for complex aerial surveillance tasks.

Overview of "Oriented Object Detection in Aerial Images with Box Boundary-Aware Vectors"

This paper addresses the challenge of detecting arbitrarily oriented objects in aerial images, a task complicated by dense object packing and diverse orientations. Traditional methods rely mainly on two-stage anchor-based detectors, which suffer from a severe imbalance between positive and negative anchor boxes. The paper instead extends keypoint-based object detection to this task, offering an anchor-free, single-stage solution that improves learning efficiency and reduces computational cost.

Methodology

The authors introduce the concept of Box Boundary-Aware Vectors (BBAVectors). Instead of regressing the width, height, and angle of an oriented bounding box, the model learns four vectors that point from the object's center to the boundaries of the box. Because these vectors always fall in the four quadrants of a Cartesian coordinate system, objects of all orientations share a consistent regression target, allowing features to be learned and shared more consistently across varying orientations.
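A minimal sketch of this decoding step is shown below, assuming the four vectors point from the object center toward the top, right, bottom, and left box boundaries; the function and variable names are illustrative and not taken from the official repository.

```python
import numpy as np

def decode_bbavectors(center, t, r, b, l):
    """Recover the four corners of an oriented box from its center and
    the four box boundary-aware vectors (a sketch of the decoding idea;
    t, r, b, l are assumed to point from the center to the midpoints of
    the top, right, bottom and left edges of the box)."""
    c = np.asarray(center, dtype=float)
    t, r, b, l = (np.asarray(v, dtype=float) for v in (t, r, b, l))
    # For a rectangle, each corner is the center plus the two adjacent edge vectors.
    top_left     = c + t + l
    top_right    = c + t + r
    bottom_right = c + b + r
    bottom_left  = c + b + l
    return np.stack([top_left, top_right, bottom_right, bottom_left])

# Example: an axis-aligned 4x2 box centered at (10, 10), image coordinates (y grows downward).
corners = decode_bbavectors(center=(10, 10),
                            t=(0, -1), r=(2, 0), b=(0, 1), l=(-2, 0))
print(corners)  # [[ 8.  9.] [12.  9.] [12. 11.] [ 8. 11.]]
```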

A core innovation in this paper is the classification of bounding boxes into horizontal and rotational categories, which handles the corner cases where the boundary vectors align closely with the coordinate axes and become difficult to regress. This strategy improves detection in complex scenes where minor angular variations would otherwise lead to significant localization errors.
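The sketch below illustrates one way to assign this horizontal-versus-rotational label. It uses the fact that an oriented box is contained in its axis-aligned envelope, so their IoU reduces to a ratio of areas; the 0.95 threshold mirrors the criterion described in the paper but should be treated as a tunable hyper-parameter, and the function name is illustrative.

```python
import numpy as np

def obb_orientation_class(corners, iou_thresh=0.95):
    """Label an oriented box as 'horizontal' or 'rotational'.

    corners: (4, 2) array of the oriented box corners in order.
    Since the oriented box lies inside its axis-aligned envelope,
    IoU(OBB, HBB) = area(OBB) / area(HBB).
    """
    corners = np.asarray(corners, dtype=float)
    x, y = corners[:, 0], corners[:, 1]
    # Shoelace formula for the oriented box area.
    obb_area = 0.5 * abs(np.dot(x, np.roll(y, 1)) - np.dot(y, np.roll(x, 1)))
    hbb_area = (x.max() - x.min()) * (y.max() - y.min())
    return "horizontal" if obb_area / hbb_area > iou_thresh else "rotational"

# A box tilted by ~45 degrees covers only half of its envelope, so it is rotational.
tilted = [(0, 2), (2, 0), (4, 2), (2, 4)]
print(obb_orientation_class(tilted))  # rotational
```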

The architecture is built on a U-Net-like backbone that uses ResNet101 for feature extraction. The network outputs a heatmap for object-center detection, an offset map for sub-pixel center localization, a box-parameter map that regresses the BBAVectors, and an orientation map that classifies each box as horizontal or rotational.
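To make the output structure concrete, the following sketch wires small convolutional heads onto a shared decoder feature map. The channel counts (per-class heatmap, 2-channel offset, 10-channel box parameters, 1-channel orientation) follow the paper's description, while the layer widths, head depth, and the example feature resolution are assumptions for illustration rather than the authors' exact implementation.

```python
import torch
import torch.nn as nn

class BBAVectorHeads(nn.Module):
    """Minimal sketch of prediction heads on top of the upsampled backbone features."""
    def __init__(self, in_channels=256, num_classes=15, head_channels=256):
        super().__init__()
        def head(out_channels):
            return nn.Sequential(
                nn.Conv2d(in_channels, head_channels, 3, padding=1),
                nn.ReLU(inplace=True),
                nn.Conv2d(head_channels, out_channels, 1),
            )
        self.heatmap = head(num_classes)   # object-center heatmap, one channel per class
        self.offset = head(2)              # sub-pixel offset of each center point
        self.box_params = head(10)         # 4 BBAVectors (8 values) + external box w, h
        self.orientation = head(1)         # horizontal vs. rotational box category

    def forward(self, feat):
        return {
            "heatmap": torch.sigmoid(self.heatmap(feat)),
            "offset": self.offset(feat),
            "box_params": self.box_params(feat),
            "orientation": torch.sigmoid(self.orientation(feat)),
        }

# feat stands in for the decoder output of the U-Net-like ResNet101 backbone
# (e.g. 1/4 of the input resolution); shapes here are purely illustrative.
feat = torch.randn(1, 256, 152, 152)
outputs = BBAVectorHeads()(feat)
print({k: tuple(v.shape) for k, v in outputs.items()})
```

At inference time, peaks in the heatmap give candidate centers, the offset map refines their positions, and the box-parameter and orientation maps at those locations are decoded into oriented boxes as sketched earlier.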

Experimental Evaluation

The proposed method was evaluated on the DOTA and HRSC2016 datasets, which provide challenging testbeds with varying scales, shapes, and orientations in aerial imagery. Experimental results show that the BBAVectors approach outperforms traditional anchor-based detectors, achieving higher mean Average Precision (mAP) at competitive inference speeds. On the DOTA dataset, for instance, BBAVectors reaches an mAP of 75.36%, surpassing the RoI Transformer baseline. The method also maintains fast, near-real-time inference, which is important for real-world deployment.

Implications and Future Work

This work contributes to the field of computer vision by providing an efficient and effective solution for oriented object detection in complex scenes, such as those found in aerial surveillance. The novel approach of using BBAVectors offers a robust alternative to angle-based bounding boxes, particularly relevant in scenarios requiring precise object orientation and localization.

Future research may focus on refining the vector representation to handle more dynamic or less structured object types in aerial imagery. There is also potential for integrating the approach with other data modalities (e.g., LiDAR) to further improve detection robustness. Additionally, adapting the method to dynamic environments, where object orientation changes over time, could be a valuable direction.

Overall, this paper provides a substantial contribution to oriented object detection, laying groundwork for further advancements in aerial image analysis and object detection methodologies.