Enhancing Edge Intelligence with Highly Discriminant LNT Features (2312.14968v1)

Published 19 Dec 2023 in eess.IV, cs.CV, and cs.LG

Abstract: AI algorithms at the edge demand smaller model sizes and lower computational complexity. To achieve these objectives, we adopt a green learning (GL) paradigm rather than the deep learning paradigm. GL has three modules: 1) unsupervised representation learning, 2) supervised feature learning, and 3) supervised decision learning. We focus on the second module in this work. In particular, we derive new discriminant features from proper linear combinations of the input features, denoted by x, obtained in the first module; the new and input features are called complementary and raw features, respectively. Along this line, we present a novel supervised learning method that generates highly discriminant complementary features based on the least-squares normal transform (LNT). LNT consists of two steps. First, we convert a C-class classification problem into a binary classification problem, with the two classes assigned labels 0 and 1, respectively. Next, we formulate a least-squares regression problem from the N-dimensional (N-D) feature space to the 1-D output space and solve the least-squares normal equation to obtain one N-D normal vector, denoted by a_1. Since each binary split yields one normal vector, M splits yield M normal vectors. Stacking a_j^T, j = 1, ..., M, gives a transform matrix A in R^{M x N}, and the LNT, Ax, generates M new features. The newly generated complementary features are shown to be more discriminant than the raw features. Experiments show that the classification performance can be improved by these new features.
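
The two-step procedure in the abstract maps directly to a few lines of linear algebra. Below is a minimal Python sketch of one plausible reading of LNT; the one-vs-rest choice of binary splits, the function names, and the use of numpy.linalg.lstsq to solve the least-squares problem are illustrative assumptions, not the paper's exact procedure.

```python
# A minimal sketch of the least-squares normal transform (LNT) described in
# the abstract. Assumptions (not from the paper): one-vs-rest binary splits,
# no intercept term, and numpy.linalg.lstsq as the normal-equation solver.
import numpy as np

def lnt_transform_matrix(X, y, splits):
    """Solve one least-squares regression per binary split.

    X: (n_samples, N) raw features from the representation-learning module.
    y: (n_samples,) integer class labels in {0, ..., C-1}.
    splits: list of sets; classes in a set map to target 1, the rest to 0.
    Returns A: (M, N) matrix whose rows are the normal vectors a_j.
    """
    rows = []
    for positive_classes in splits:
        # Convert the C-class problem into a binary one with 0/1 targets.
        t = np.isin(y, list(positive_classes)).astype(float)
        # Least-squares solution of X a = t, i.e. the normal-equation solve.
        a, *_ = np.linalg.lstsq(X, t, rcond=None)
        rows.append(a)
    return np.stack(rows)  # A in R^{M x N}

def lnt_features(X, A):
    """Complementary features Ax for each sample (rows of X)."""
    return X @ A.T  # shape (n_samples, M)

# Example: a 4-class problem with one-vs-rest splits (M = C = 4).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 16))    # N = 16 raw features
y = rng.integers(0, 4, size=200)  # C = 4 classes
A = lnt_transform_matrix(X, y, splits=[{c} for c in range(4)])
X_new = lnt_features(X, A)        # M = 4 complementary features
X_aug = np.hstack([X, X_new])     # raw + complementary features
```

Concatenating X_new with the raw features, as in the last line, is one natural way to use the output that is consistent with the abstract's framing of the new features as complementary to, rather than replacements for, the raw ones.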
