SCAResNet: A ResNet Variant Optimized for Tiny Object Detection in Transmission and Distribution Towers (2404.04179v1)
Abstract: Traditional deep learning-based object detection networks often resize images during data preprocessing so that feature maps have a uniform size and scale, which simplifies model propagation and fully connected classification. However, resizing inevitably deforms objects and discards valuable image information. This drawback is particularly pronounced for tiny objects such as distribution towers, which have linear shapes and occupy few pixels. To address this issue, we propose abandoning the resizing operation. Instead, we introduce Positional-Encoding Multi-head Criss-Cross Attention, which allows the model to capture contextual information and learn from multiple representation subspaces, effectively enriching the semantics of distribution towers. Additionally, we enhance Spatial Pyramid Pooling by reshaping three pooled feature maps into a single unified map while also reducing the computational burden; this lets images of different sizes and scales produce feature maps of uniform dimensions that can be propagated through the rest of the network. Our SCAResNet incorporates these improvements into the ResNet backbone. We evaluated SCAResNet on the Electric Transmission and Distribution Infrastructure Imagery dataset from Duke University. Without any additional tricks, we used various object detection models with Gaussian Receptive Field based Label Assignment (RFLA) as the baseline; incorporating SCAResNet into the baseline yielded a 2.1% improvement in mAPs. This demonstrates the advantages of SCAResNet in detecting transmission and distribution towers and its value for tiny object detection. The source code is available at https://github.com/LisavilaLee/SCAResNet_mmdet.
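To make the first mechanism concrete, below is a minimal PyTorch sketch of multi-head criss-cross attention with an additive positional encoding. It is a hypothetical re-implementation, not the code from the linked repository: it simplifies CCNet by normalizing the row and column attention separately rather than with a joint softmax, and the head count, learnable 1-D position embeddings, and residual gate are assumptions for illustration.

```python
import torch
import torch.nn as nn

class PEMultiHeadCrissCrossAttention(nn.Module):
    """Multi-head criss-cross attention with additive positional encoding.

    Each pixel attends only to the pixels in its own row and column, which
    gathers long-range context at O(H+W) cost per position and works on
    feature maps of any size (no resizing required).
    """

    def __init__(self, channels: int, num_heads: int = 4, max_hw: int = 2048):
        super().__init__()
        assert channels % num_heads == 0
        self.h, self.d = num_heads, channels // num_heads
        self.qkv = nn.Conv2d(channels, channels * 3, kernel_size=1)
        self.proj = nn.Conv2d(channels, channels, kernel_size=1)
        # Learnable 1-D row/column position embeddings (assumed scheme).
        self.row_pe = nn.Parameter(torch.zeros(1, channels, max_hw, 1))
        self.col_pe = nn.Parameter(torch.zeros(1, channels, 1, max_hw))
        self.gamma = nn.Parameter(torch.zeros(1))  # residual gate, as in CCNet

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        B, C, H, W = x.shape
        x_pe = x + self.row_pe[:, :, :H] + self.col_pe[..., :W]
        # Split Q, K, V into heads: (B, heads, d, H, W).
        q, k, v = (t.view(B, self.h, self.d, H, W)
                   for t in self.qkv(x_pe).chunk(3, dim=1))

        def attend(q, k, v):
            # q, k, v: (batch, length, d) -> scaled dot-product attention.
            w = torch.softmax(q @ k.transpose(1, 2) / self.d ** 0.5, dim=-1)
            return w @ v

        # Column direction: one length-H sequence per (batch, head, column).
        col = lambda t: t.permute(0, 1, 4, 3, 2).reshape(B * self.h * W, H, self.d)
        out_c = attend(col(q), col(k), col(v))
        out_c = out_c.view(B, self.h, W, H, self.d).permute(0, 1, 4, 3, 2)
        # Row direction: one length-W sequence per (batch, head, row).
        row = lambda t: t.permute(0, 1, 3, 4, 2).reshape(B * self.h * H, W, self.d)
        out_r = attend(row(q), row(k), row(v))
        out_r = out_r.view(B, self.h, H, W, self.d).permute(0, 1, 4, 2, 3)

        out = (out_c + out_r).reshape(B, C, H, W)
        return x + self.gamma * self.proj(out)
```

The SPP enhancement can be sketched analogously: where the original SPP-net flattens its pooled pyramid into a fixed-length vector for fully connected layers, the three pooled maps are instead merged back into a single fixed-size 2-D feature map that convolutional layers can keep propagating. The branch sizes (2, 4, 8), nearest-neighbour upsampling, and 1x1 fusion convolution below are illustrative assumptions; the paper's exact reshaping may differ.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class UnifiedSPP(nn.Module):
    """SPP variant that outputs a fixed-size 2-D map instead of a flat vector.

    Three adaptive-pooling branches produce s x s grids; the smaller grids
    are upsampled to the largest one and fused with a 1x1 convolution, so an
    input of any spatial size yields the same output shape.
    """

    def __init__(self, channels: int, sizes=(2, 4, 8)):
        super().__init__()
        self.sizes = sizes
        self.fuse = nn.Conv2d(channels * len(sizes), channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        target = self.sizes[-1]
        pooled = [F.adaptive_max_pool2d(x, s) for s in self.sizes]
        pooled = [p if p.shape[-1] == target
                  else F.interpolate(p, size=(target, target), mode="nearest")
                  for p in pooled]
        return self.fuse(torch.cat(pooled, dim=1))  # (B, C, target, target)

# Inputs of different sizes map to one uniform feature-map shape:
spp = UnifiedSPP(256)
for hw in [(37, 61), (128, 96)]:
    print(spp(torch.randn(1, 256, *hw)).shape)  # torch.Size([1, 256, 8, 8])
```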
- C. Xu, J. Wang, W. Yang, and L. Yu, “Dot distance for tiny object detection in aerial images,” in 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), pp. 1192–1201, 2021.
- T.-Y. Lin, P. Dollár, R. Girshick, K. He, et al., “Feature pyramid networks for object detection,” 2017.
- C. Xu, J. Wang, W. Yang, H. Yu, et al., “RFLA: Gaussian receptive field based label assignment for tiny object detection,” 2022.
- J. Wang, C. Xu, W. Yang, and L. Yu, “A normalized Gaussian Wasserstein distance for tiny object detection,” 2022.
- W. Luo, Y. Li, R. Urtasun, and R. Zemel, “Understanding the effective receptive field in deep convolutional neural networks,” 2017.
- D. He, Q. Shi, X. Liu, Y. Zhong, et al., “Generating annual high resolution land cover products for 28 metropolises in China based on a deep super-resolution mapping network using Landsat imagery,” GIScience & Remote Sensing, vol. 59, no. 1, pp. 2036–2067, 2022.
- K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” CoRR, vol. abs/1512.03385, 2015.
- K. Bradbury, Q. Han, V. Nair, et al., “Electric Transmission and Distribution Infrastructure Imagery Dataset,” Aug. 2018.
- Z. Huang, X. Wang, Y. Wei, L. Huang, et al., “CCNet: Criss-cross attention for semantic segmentation,” 2020.
- A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, et al., “Attention is all you need,” 2017.
- K. He, X. Zhang, S. Ren, and J. Sun, “Spatial pyramid pooling in deep convolutional networks for visual recognition,” in Computer Vision – ECCV 2014, pp. 346–361, Springer International Publishing, 2014.
- C.-Y. Wang, H.-Y. M. Liao, I.-H. Yeh, et al., “CSPNet: A new backbone that can enhance learning capability of CNN,” 2019.
- Q. Shi, M. Liu, A. Marinoni, and X. Liu, “UGS-1M: Fine-grained urban green space mapping of 31 major cities in China based on the deep learning framework,” Earth System Science Data, vol. 15, no. 2, pp. 555–577, 2023.
- F. Chollet, “Xception: Deep learning with depthwise separable convolutions,” 2017.
- J. Hu, L. Shen, S. Albanie, G. Sun, et al., “Squeeze-and-excitation networks,” 2019.
- T.-Y. Lin, M. Maire, S. Belongie, L. Bourdev, R. Girshick, J. Hays, P. Perona, D. Ramanan, C. L. Zitnick, and P. Dollár, “Microsoft COCO: Common objects in context,” 2015.
- A. Paszke, S. Gross, F. Massa, A. Lerer, et al., “PyTorch: An imperative style, high-performance deep learning library,” 2019.
- K. Chen, J. Wang, J. Pang, Y. Cao, et al., “MMDetection: Open MMLab detection toolbox and benchmark,” 2019.
- O. Russakovsky, J. Deng, H. Su, J. Krause, et al., “ImageNet large scale visual recognition challenge,” 2015.
- Z. Cai and N. Vasconcelos, “Cascade R-CNN: Delving into high quality object detection,” 2017.
- S. Ren, K. He, R. Girshick, and J. Sun, “Faster R-CNN: Towards real-time object detection with region proposal networks,” 2016.
- Z. Tian, C. Shen, H. Chen, and T. He, “FCOS: Fully convolutional one-stage object detection,” 2019.
Authors: Weile Li, Muqing Shi, Zhonghua Hong