
Provable Tensor Completion with Graph Information (2310.02543v1)

Published 4 Oct 2023 in cs.LG

Abstract: Graphs, which depict the interrelations between variables, have been widely used as effective side information for accurate data recovery in matrix- and tensor-recovery applications. In this paper, we study the tensor completion problem with graph information. Existing research on graph-regularized tensor completion tends to be task-specific, lacking generality and systematic approaches, and a recovery theory guaranteeing performance remains absent. Moreover, these approaches treat graphs as static, akin to matrices, even though graphs can be dynamic in tensor-related scenarios. To address these challenges, we introduce a framework that systematically formulates a novel model, theory, and algorithm for the dynamic graph regularized tensor completion problem. For the model, we establish a rigorous mathematical representation of the dynamic graph, from which we derive a new tensor-oriented graph smoothness regularization. By integrating this regularization into a tensor decomposition model based on the transformed t-SVD, we develop a comprehensive model that simultaneously captures the low-rank and similarity structure of the tensor. For the theory, we show the alignment between the proposed graph smoothness regularization and a weighted tensor nuclear norm, and we then establish statistical consistency guarantees for our model, bridging a gap in the theoretical analysis of tensor recovery with graph information. For the algorithm, we develop an effective solver with guaranteed convergence for the resulting model. Finally, in-depth numerical experiments on synthetic and real-world datasets demonstrate the advantages of the proposed model over established ones.
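The graph smoothness idea the abstract describes can be illustrated with a minimal sketch: penalize, on each frontal slice of a third-order tensor, the quadratic form induced by a graph Laplacian, and combine it with a data-fit term on the observed entries. Everything below is an assumption for illustration — the function names (`graph_laplacian`, `smoothness`, `complete`), the plain gradient-descent solver, and the static graph are generic choices, not the paper's actual algorithm, which additionally enforces a low transformed-t-SVD rank and models a dynamic graph.

```python
import numpy as np

def graph_laplacian(W):
    """Unnormalized graph Laplacian L = D - W from a symmetric adjacency matrix W."""
    return np.diag(W.sum(axis=1)) - W

def smoothness(X, L):
    """Slice-wise graph smoothness of a 3-way tensor X:
    sum_k trace(X[:, :, k].T @ L @ X[:, :, k]).
    Zero when every column of every slice is constant across connected nodes."""
    return sum(np.trace(X[:, :, k].T @ L @ X[:, :, k]) for k in range(X.shape[2]))

def complete(M, mask, L, alpha=0.1, lr=0.05, iters=2000):
    """Gradient descent on 0.5*||mask*(X - M)||_F^2 + 0.5*alpha*smoothness(X, L).
    Illustrative only: no low-rank term, fixed step size, static graph."""
    X = np.where(mask, M, 0.0)          # initialize unobserved entries at zero
    for _ in range(iters):
        grad = mask * (X - M)            # data-fit gradient on observed entries
        for k in range(X.shape[2]):
            grad[:, :, k] += alpha * (L @ X[:, :, k])  # smoothness gradient
        X -= lr * grad
    return X
```

On a toy chain graph, the smoothness term propagates the observed values of a node's neighbors into its missing entries, which is exactly the mechanism that graph side information contributes on top of low-rankness.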

