Wise-SrNet: A Novel Architecture for Enhancing Image Classification by Learning Spatial Resolution of Feature Maps (2104.12294v3)

Published 26 Apr 2021 in cs.CV and cs.AI

Abstract: One of the main challenges since the advancement of convolutional neural networks has been how to connect the extracted feature map to the final classification layer. VGG models used two sets of fully connected layers for the classification part of their architectures, which significantly increased the number of model weights. ResNet and subsequent deep convolutional models used the Global Average Pooling (GAP) layer to compress the feature map and feed it to the classification layer. Although the GAP layer reduces the computational cost, it also discards the spatial resolution of the feature map, which reduces learning efficiency. In this paper, we aim to tackle this problem by replacing the GAP layer with a new architecture called Wise-SrNet. It is inspired by the depthwise convolutional idea and is designed to process spatial resolution without increasing computational cost. We have evaluated our method on three different datasets: the Intel Image Classification Challenge, MIT Indoor Scenes, and a subset of the ImageNet dataset. We investigated the implementation of our architecture on several models of the Inception, ResNet, and DenseNet families. Applying our architecture significantly increased both convergence speed and accuracy. Our experiments on images with 224x224 resolution improved Top-1 accuracy by 2% to 8% across different datasets and models. Running our models on 512x512 resolution images of the MIT Indoor Scenes dataset improved Top-1 accuracy by 3% to 26%. We also demonstrate the GAP layer's disadvantage when the input images are large and the number of classes is not small. In this circumstance, our proposed architecture can substantially improve classification results. The code is shared at https://github.com/mr7495/image-classification-spatial.

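The core contrast the abstract draws can be sketched in a few lines: GAP collapses every spatial position with equal weight, while a depthwise-style head keeps one learnable weight per spatial position per channel. The snippet below is a minimal NumPy illustration of that idea, not the paper's actual Wise-SrNet implementation; the shapes, the random weights, and the per-channel weighted sum are illustrative assumptions based only on the abstract's description.

```python
import numpy as np

rng = np.random.default_rng(0)
H, W, C = 7, 7, 2048  # typical ResNet-family feature-map size for a 224x224 input

feat = rng.standard_normal((H, W, C))

# Global Average Pooling: every spatial position is weighted equally,
# so the classifier cannot learn *where* in the map the evidence lies.
gap = feat.mean(axis=(0, 1))  # shape (C,)

# Depthwise-inspired spatial aggregation (hypothetical sketch of the
# idea behind Wise-SrNet): one learnable weight per spatial position
# per channel, i.e. only H*W*C extra parameters, versus H*W*C*num_classes
# for a flatten + fully connected head.
spatial_w = rng.standard_normal((H, W, C))  # would be learned in training
wise = (feat * spatial_w).sum(axis=(0, 1))  # shape (C,), keeps spatial info

print(gap.shape, wise.shape)  # both are (2048,) channel descriptors
```

Both heads produce a vector of the same size for the final classification layer, which is why the swap does not change downstream cost; the difference is that the weighted sum can emphasize informative spatial positions instead of averaging them away.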
