Input Guided Multiple Deconstruction Single Reconstruction neural network models for Matrix Factorization (2405.13449v1)

Published 22 May 2024 in cs.LG and cs.AI

Abstract: Referring back to the original text in the course of hierarchical learning is a common human trait that keeps learning on the right track. The models developed in this paper, based on the concept of Non-negative Matrix Factorization (NMF), are inspired by this idea. They handle high-dimensional data by discovering a low-rank approximation of it, determining a unique pair of factor matrices. The first model, the Input Guided Multiple Deconstruction Single Reconstruction neural network for Non-negative Matrix Factorization (IG-MDSR-NMF), enforces non-negativity constraints on both factors, whereas the Input Guided Multiple Deconstruction Single Reconstruction neural network for Relaxed Non-negative Matrix Factorization (IG-MDSR-RNMF) introduces a novel form of factorization in which only the basis matrix must satisfy the non-negativity criterion. This relaxation helps the model learn a richer low-dimensional embedding of the original data matrix. The ability of both models to preserve the local structure of the data in its low-rank embedding has been verified, and the advantage of the low-dimensional embedding over the original data, justifying the need for dimension reduction, has been established. The merit of both models has also been validated by comparing their performance separately with that of nine other established dimension reduction algorithms on five popular datasets. Moreover, the computational complexity of the models and a convergence analysis are presented, attesting to the strength of both models.
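To make the two constraint regimes described in the abstract concrete, below is a minimal NumPy sketch of the underlying factorization objectives using plain projected gradient descent. It illustrates only the difference between the NMF constraint (both factors non-negative) and the relaxed constraint (only the basis matrix non-negative); it is not the authors' IG-MDSR neural network, and the function name, rank, step count, and learning rate are illustrative assumptions.

```python
# Sketch of the two factorization objectives, NOT the IG-MDSR architecture.
import numpy as np

def factorize(X, r, relaxed=False, steps=5000, lr=1e-3, seed=0):
    """Approximate X (n x m) as W @ H with rank r by projected gradient
    descent on the squared reconstruction error 0.5 * ||X - WH||_F^2.

    relaxed=False : both W and H projected to be non-negative (NMF-style).
    relaxed=True  : only the basis matrix W kept non-negative (RNMF-style).
    """
    rng = np.random.default_rng(seed)
    n, m = X.shape
    W = rng.random((n, r))
    H = rng.random((r, m))
    for _ in range(steps):
        E = W @ H - X                  # reconstruction error
        gW, gH = E @ H.T, W.T @ E      # gradients w.r.t. W and H
        W -= lr * gW
        H -= lr * gH
        W = np.maximum(W, 0.0)         # basis matrix is always non-negative
        if not relaxed:
            H = np.maximum(H, 0.0)     # coefficient matrix too, for NMF
    return W, H

# Toy usage on a random non-negative data matrix.
X = np.random.default_rng(1).random((60, 20))
W, H = factorize(X, r=5)                    # NMF-style: W >= 0 and H >= 0
Wr, Hr = factorize(X, r=5, relaxed=True)    # relaxed: only W >= 0
print(np.linalg.norm(X - W @ H), np.linalg.norm(X - Wr @ Hr))
```

Dropping the non-negativity projection on the coefficient matrix enlarges the feasible set, which is why the relaxed variant can reach a lower reconstruction error and, as the paper argues, a more enriched embedding.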
