SHA-CNN: Scalable Hierarchical Aware Convolutional Neural Network for Edge AI (2407.21370v1)

Published 31 Jul 2024 in cs.NE

Abstract: This paper introduces a Scalable Hierarchical Aware Convolutional Neural Network (SHA-CNN) model architecture for Edge AI applications. The proposed hierarchical CNN model is designed to balance computational efficiency and accuracy, addressing the challenges posed by resource-constrained edge devices. SHA-CNN achieves accuracy comparable to state-of-the-art hierarchical models while outperforming baseline models. The key innovation lies in the model's hierarchical awareness, enabling it to discern and prioritize relevant features at multiple levels of abstraction. The proposed architecture classifies data hierarchically, facilitating a nuanced understanding of complex features within the datasets. Moreover, SHA-CNN exhibits remarkable scalability, allowing new classes to be incorporated seamlessly. This flexibility is particularly advantageous in dynamic environments where the model must adapt to evolving datasets and accommodate additional classes without extensive retraining. The proposed model was validated on the PYNQ Z2 FPGA board, achieving accuracies of 99.34%, 83.35%, and 63.66% on the MNIST, CIFAR-10, and CIFAR-100 datasets, respectively. For CIFAR-100, the proposed architecture performs hierarchical classification with 10% less computation at the cost of only 0.7% accuracy relative to the state-of-the-art. The adaptability of SHA-CNN to FPGA architectures underscores its potential for deployment on edge devices, where computational resources are limited. The SHA-CNN framework thus emerges as a promising advance at the intersection of hierarchical CNNs, scalability, and FPGA-based Edge AI.
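
To make the "hierarchical awareness" idea concrete, below is a minimal PyTorch-style sketch of a CNN with a shared feature extractor, a coarse (superclass) head, and a fine (class) head conditioned on the coarse prediction. The layer sizes, class counts (20 coarse / 100 fine, as in CIFAR-100), and loss weighting are illustrative assumptions for exposition, not the paper's exact SHA-CNN configuration.

```python
# Illustrative hierarchical CNN: a coarse head predicts the superclass,
# and a fine head uses the shared features plus the coarse prediction.
# Architecture details are assumed, not taken from the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

class HierarchicalCNN(nn.Module):
    def __init__(self, n_coarse: int = 20, n_fine: int = 100):
        super().__init__()
        # Shared convolutional feature extractor.
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        # Coarse head: superclass prediction from shared features.
        self.coarse_head = nn.Linear(128, n_coarse)
        # Fine head: sees shared features plus the coarse distribution,
        # so the coarse level conditions the fine-grained decision.
        self.fine_head = nn.Linear(128 + n_coarse, n_fine)

    def forward(self, x):
        feats = self.features(x).flatten(1)
        coarse_logits = self.coarse_head(feats)
        fine_in = torch.cat([feats, F.softmax(coarse_logits, dim=1)], dim=1)
        fine_logits = self.fine_head(fine_in)
        return coarse_logits, fine_logits

def hierarchical_loss(coarse_logits, fine_logits, coarse_y, fine_y, alpha=0.3):
    # Weighted sum of coarse and fine cross-entropy terms; alpha is an
    # assumed hyperparameter, not the paper's reported value.
    return (alpha * F.cross_entropy(coarse_logits, coarse_y)
            + (1 - alpha) * F.cross_entropy(fine_logits, fine_y))
```

In a design of this kind, the scalability property described in the abstract would correspond to extending only the fine head (and, if needed, the coarse head) when new classes arrive, while the shared feature extractor is left largely intact rather than retrained from scratch.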

Authors (4)
  1. Narendra Singh Dhakad
  2. Yuvnish Malhotra
  3. Santosh Kumar Vishvakarma
  4. Kaushik Roy