Hybrid Quantum-inspired Resnet and Densenet for Pattern Recognition (2403.05754v7)

Published 9 Mar 2024 in cs.LG and cs.ET

Abstract: In this paper, we propose two hybrid quantum-inspired neural networks, with adaptive residual and dense connections respectively, for pattern recognition. We describe the frameworks of the symmetrical circuit models used in the quantum-inspired layers of our hybrid models, and we illustrate their potential to prevent gradient explosion owing to the sine and cosine functions in those layers. Numerical experiments on generalization show that our hybrid models are comparable to pure classical models across different noisy datasets. Furthermore, a comparison with a state-of-the-art hybrid quantum-classical convolutional network shows that our densely connected hybrid model achieves 3%-4% higher accuracy than the hybrid quantum-classical network. Compared with two other hybrid quantum-inspired residual networks, our hybrid models also achieve slightly higher accuracy on image datasets with asymmetrical noise. In robustness experiments, our two hybrid models notably outperform pure classical models in resistance to adversarial parameter attacks under various asymmetrical noises, and our densely connected hybrid model is slightly more robust than the hybrid quantum-classical network to both symmetrical and asymmetrical attacks. The accuracy of our two hybrid models is also slightly higher than that of the two hybrid quantum-inspired residual networks. In addition, an ablation study indicates that the recognition accuracy of our two hybrid models is 2%-3% higher than that of a traditional quantum-inspired neural network without residual or dense connections. Finally, we discuss the application scenarios of our hybrid models by analyzing their computational complexity.
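The abstract describes quantum-inspired layers whose outputs depend on sine and cosine of trainable parameters (keeping derivatives bounded and so discouraging gradient explosion), wrapped with adaptive residual connections. The paper's exact layer definition is not reproduced on this page, so the following is only a minimal illustrative sketch under assumptions: the class names `QuantumInspiredLayer` and `ResidualQuantumInspiredBlock`, the pairwise 2x2 rotation mixing, and the learnable residual weight `alpha` are all hypothetical, not the authors' implementation.

```python
import torch
import torch.nn as nn

class QuantumInspiredLayer(nn.Module):
    """Illustrative quantum-inspired layer (assumption, not the paper's code):
    feature channels are mixed pairwise by learnable 2x2 rotations, so outputs
    are combinations of sin/cos of the parameters and gradients stay bounded."""
    def __init__(self, dim: int):
        super().__init__()
        assert dim % 2 == 0, "pairwise rotations need an even feature dimension"
        self.theta = nn.Parameter(torch.zeros(dim // 2))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Split features into two halves and mix them with a rotation,
        # loosely analogous to an interferometer-style circuit element.
        a, b = x.chunk(2, dim=-1)
        c, s = torch.cos(self.theta), torch.sin(self.theta)
        return torch.cat([c * a - s * b, s * a + c * b], dim=-1)

class ResidualQuantumInspiredBlock(nn.Module):
    """Quantum-inspired layer wrapped with an adaptive residual connection;
    the learnable scalar weight `alpha` is a hypothetical stand-in for the
    paper's adaptive connection."""
    def __init__(self, dim: int):
        super().__init__()
        self.layer = QuantumInspiredLayer(dim)
        self.alpha = nn.Parameter(torch.tensor(1.0))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.alpha * self.layer(x)

# Minimal usage example
block = ResidualQuantumInspiredBlock(dim=8)
out = block(torch.randn(4, 8))
print(out.shape)  # torch.Size([4, 8])
```

A dense-connection variant would instead concatenate each block's output with its inputs before the next block, in the spirit of DenseNet; the residual form above is kept only for brevity.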

