Federated Learning Using Coupled Tensor Train Decomposition (2403.02898v1)
Abstract: Coupled tensor decomposition (CTD) can extract joint features from multimodal data and can be employed in federated learning networks that require data confidentiality. Federated CTD protects data privacy by sharing only common features while keeping individual features local. However, traditional CTD schemes based on canonical polyadic decomposition (CPD) may suffer from low computational efficiency and heavy communication costs. Inspired by the efficient tensor train decomposition, we propose a coupled tensor train (CTT) decomposition for federated learning. The distributed coupled multi-way data are decomposed into a series of tensor trains with shared factors, so that common features of the coupled modes are extracted while the distinct features of the uncoupled modes are preserved, ensuring the privacy of information across different network nodes. The proposed CTT approach is instantiated for two fundamental network structures, namely master-slave and decentralized networks. Experimental results on synthetic and real datasets demonstrate that the proposed schemes outperform existing methods in both computational efficiency and communication rounds. In a classification task, CTT-based federated learning achieves almost the same accuracy as its centralized counterpart.
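The coupled data model sketched in the abstract (tensor trains that share a factor on the coupled mode) can be made concrete with a short example. The snippet below is an illustrative toy under stated assumptions, not the authors' CTT algorithm: it builds three node tensors from a common first TT core, factorizes each locally with TT-SVD (Oseledets, 2011), and averages the coupled cores FedAvg-style to show which quantity would be communicated. All function names and the averaging step are assumptions; a practical scheme must also align the per-node factors, since each local factorization is only determined up to an invertible transform.

```python
# Toy sketch of the coupled tensor-train (CTT) setting. Each node holds a
# 3-way tensor whose first (coupled) mode is shared across nodes; the
# remaining modes are node-specific. NOT the paper's update rules.
import numpy as np

def tt_svd(x, max_rank):
    """TT-SVD: sequential truncated SVDs along the modes.
    Returns 3-way cores G_k of shape (r_{k-1}, n_k, r_k)."""
    dims = x.shape
    cores, r_prev = [], 1
    c = x
    for k in range(len(dims) - 1):
        c = c.reshape(r_prev * dims[k], -1)
        u, s, vt = np.linalg.svd(c, full_matrices=False)
        r = min(max_rank, s.size)
        cores.append(u[:, :r].reshape(r_prev, dims[k], r))
        c = s[:r, None] * vt[:r]      # carry the remainder to the next mode
        r_prev = r
    cores.append(c.reshape(r_prev, dims[-1], 1))
    return cores

def tt_to_full(cores):
    """Contract TT cores back into the full array (for error checking)."""
    full = cores[0]
    for g in cores[1:]:
        full = np.tensordot(full, g, axes=(full.ndim - 1, 0))
    return full.squeeze(axis=(0, -1))

rng = np.random.default_rng(0)
I, rank = 20, 3                                  # coupled-mode size, TT rank
shared = rng.standard_normal((1, I, rank))       # common first core
nodes = []
for (J, K) in [(15, 10), (12, 18), (25, 8)]:     # node-specific mode sizes
    g2 = rng.standard_normal((rank, J, rank))
    g3 = rng.standard_normal((rank, K, 1))
    nodes.append(tt_to_full([shared, g2, g3]))   # local coupled tensor

# Each node factorizes locally; only the small coupled core would be shared.
local_tts = [tt_svd(x, rank) for x in nodes]
avg_core = np.mean([tt[0] for tt in local_tts], axis=0)  # naive aggregation
print("communicated coupled-core shape:", avg_core.shape)

for x, tt in zip(nodes, local_tts):
    err = np.linalg.norm(tt_to_full(tt) - x) / np.linalg.norm(x)
    print(f"local TT relative error: {err:.2e}")
```

Note the communication argument this illustrates: the shared core has only 1 x I x r entries, independent of the node-specific mode sizes J and K, whereas exchanging raw data or full CPD factor matrices would scale with the local dimensions.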