Graph Learning Across Data Silos (2301.06662v4)

Published 17 Jan 2023 in cs.LG, cs.CR, and eess.SP

Abstract: We consider the problem of inferring graph topology from smooth graph signals in a novel but practical scenario where data are located in distributed clients and prohibited from leaving local clients due to factors such as privacy concerns. The main difficulty in this task is how to exploit the potentially heterogeneous data of all clients under data silos. To this end, we first propose an auto-weighted multiple graph learning model to jointly learn a personalized graph for each local client and a single consensus graph for all clients. The personalized graphs match local data distributions, thereby mitigating data heterogeneity, while the consensus graph captures the global information. Moreover, the model can automatically assign appropriate contribution weights to local graphs based on their similarity to the consensus graph. We next devise a tailored algorithm to solve the induced problem, where all raw data are processed locally without leaving clients. Theoretically, we establish a provable estimation error bound and convergence analysis for the proposed model and algorithm. Finally, extensive experiments on synthetic and real data are carried out, and the results illustrate that our approach can learn graphs effectively in the target scenario.
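The paper itself specifies the exact objective and its federated solver; as a rough illustration only, the alternating structure described in the abstract (personalized per-client graphs, a consensus graph, and automatic contribution weights) can be sketched as below. All function names, the quadratic smoothness surrogate, and the inverse-distance weighting rule are assumptions for illustration, not the authors' actual formulation.

```python
import numpy as np

def pairwise_sq_dists(X):
    """Z[i, j] = ||x_i - x_j||^2 for node-signal rows of X (nodes x signals).
    Smooth graph signals make Z small on edges that should carry weight."""
    sq = np.sum(X ** 2, axis=1)
    return sq[:, None] + sq[None, :] - 2.0 * X @ X.T

def local_graph_update(Z, w_bar, beta, rho):
    """Elementwise closed-form minimiser of
        <w, Z> + beta * ||w||^2 + rho * ||w - w_bar||^2,   w >= 0,
    i.e. fit the local smoothness term while staying near the consensus."""
    w = (2.0 * rho * w_bar - Z) / (2.0 * (beta + rho))
    return np.maximum(w, 0.0)

def federated_graph_learning(client_signals, beta=1.0, n_iters=20, eps=1e-8):
    """Hypothetical alternating scheme: clients update personalized graphs
    from local data only; the server aggregates graphs (never raw signals)
    into a consensus graph with automatically chosen weights."""
    n = client_signals[0].shape[0]
    Zs = [pairwise_sq_dists(X) for X in client_signals]   # stays on-client
    w_bar = np.zeros((n, n))
    rhos = np.ones(len(Zs))
    for _ in range(n_iters):
        # local step: each client solves its regularized subproblem
        Ws = [local_graph_update(Z, w_bar, beta, rho)
              for Z, rho in zip(Zs, rhos)]
        # auto-weights (assumed rule): graphs closer to the consensus
        # receive larger contribution weights
        rhos = np.array([1.0 / (2.0 * np.linalg.norm(W - w_bar) + eps)
                         for W in Ws])
        # server step: weighted average of local graphs, not of raw data
        w_bar = sum(r * W for r, W in zip(rhos, Ws)) / rhos.sum()
    return Ws, w_bar
```

Only the learned adjacency matrices cross the client/server boundary in this sketch, which is the sense in which "all raw data are processed locally" in the abstract.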
