A Hybrid Quantum-Classical Approach based on the Hadamard Transform for the Convolutional Layer (2305.17510v3)

Published 27 May 2023 in cs.CV and eess.SP

Abstract: In this paper, we propose a novel Hadamard Transform (HT)-based neural network layer for hybrid quantum-classical computing. It implements regular convolutional layers in the Hadamard transform domain. The idea is based on the HT convolution theorem, which states that the dyadic convolution of two vectors is equivalent to the element-wise multiplication of their HT representations. Computing the HT is simply the application of a Hadamard gate to each qubit individually, so the HT computations of our proposed layer can be implemented on a quantum computer. Compared to the regular Conv2D layer, the proposed HT-perceptron layer is computationally more efficient. Compared to a CNN with the same number of trainable parameters and 99.26% test accuracy, our HT network reaches 99.31% test accuracy on the MNIST dataset with 57.1% fewer MACs; and in our ImageNet-1K experiments, our HT-based ResNet-50 exceeds the center-crop top-1 accuracy of the baseline ResNet-50 by 0.59% while using 11.5% fewer parameters and 12.6% fewer MACs.
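The core property the abstract relies on, that dyadic convolution becomes element-wise multiplication under the Walsh-Hadamard transform, can be checked numerically. The sketch below is illustrative only and not taken from the paper; the helper names fwht and dyadic_convolution are ours, and an unnormalized, Sylvester-ordered transform is assumed.

```python
import numpy as np

def fwht(x):
    """Unnormalized fast Walsh-Hadamard transform (Sylvester/natural ordering).
    Input length must be a power of two."""
    x = np.asarray(x, dtype=float).copy()
    h, n = 1, len(x)
    while h < n:
        for i in range(0, n, 2 * h):
            for j in range(i, i + h):
                a, b = x[j], x[j + h]
                x[j], x[j + h] = a + b, a - b
        h *= 2
    return x

def dyadic_convolution(x, y):
    """Direct dyadic (XOR) convolution: z[k] = sum_m x[m] * y[k XOR m]."""
    n = len(x)
    return np.array([sum(x[m] * y[k ^ m] for m in range(n)) for k in range(n)])

rng = np.random.default_rng(0)
x, y = rng.standard_normal(8), rng.standard_normal(8)

# HT convolution theorem: the WHT of the dyadic convolution equals the
# element-wise product of the individual WHTs.
assert np.allclose(fwht(dyadic_convolution(x, y)), fwht(x) * fwht(y))

# So the convolution can be computed in the transform domain; the inverse of
# the unnormalized WHT is the same transform scaled by 1/N.
z = fwht(fwht(x) * fwht(y)) / len(x)
assert np.allclose(z, dyadic_convolution(x, y))

# The N-point transform is the k-fold Kronecker product of the 2x2 Hadamard
# gate, which is why applying a Hadamard gate to each of the k qubits of a
# register realizes the same transform on a quantum computer.
H2 = np.array([[1.0, 1.0], [1.0, -1.0]])
H8 = np.kron(np.kron(H2, H2), H2)
assert np.allclose(H8 @ x, fwht(x))
```

The last check mirrors the abstract's observation that the HT part of the layer maps directly onto per-qubit Hadamard gates, while the element-wise multiplication plays the role of the learned weights in the transform domain.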

