
Unsupervised Federated Optimization at the Edge: D2D-Enabled Learning without Labels (2404.09861v1)

Published 15 Apr 2024 in cs.LG

Abstract: Federated learning (FL) is a popular solution for distributed ML. While FL has traditionally been studied for supervised ML tasks, in many applications, it is impractical to assume availability of labeled data across devices. To this end, we develop Cooperative Federated unsupervised Contrastive Learning (CF-CL) to facilitate FL across edge devices with unlabeled datasets. CF-CL employs local device cooperation where either explicit (i.e., raw data) or implicit (i.e., embeddings) information is exchanged through device-to-device (D2D) communications to improve local diversity. Specifically, we introduce a *smart information push-pull* methodology for data/embedding exchange tailored to FL settings with either soft or strict data privacy restrictions. Information sharing is conducted through a probabilistic importance sampling technique at receivers leveraging a carefully crafted reserve dataset provided by transmitters. In the implicit case, embedding exchange is further integrated into the local ML training at the devices via a regularization term incorporated into the contrastive loss, augmented with a dynamic contrastive margin to adjust the volume of latent space explored. Numerical evaluations demonstrate that CF-CL leads to alignment of latent spaces learned across devices, results in faster and more efficient global model training, and is effective in extreme non-i.i.d. data distribution settings across devices.
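The two mechanisms the abstract describes can be illustrated with a minimal sketch: (1) a receiver that draws shared items via probabilistic importance sampling, and (2) a contrastive loss augmented with a margin-based regularization term over embeddings received via D2D links. Note this is a hedged illustration, not the paper's actual formulation: the scoring function, the NT-Xent-style loss form, and the hinge-style margin penalty below are assumptions chosen for clarity.

```python
import numpy as np


def importance_sample(candidates, scores, k, rng):
    """Probabilistic importance sampling at a receiver: draw k items
    with probability proportional to an importance score.

    The scoring itself is hypothetical here; the paper derives its own
    importance measure from a reserve dataset crafted by transmitters.
    """
    p = np.asarray(scores, dtype=float)
    p = p / p.sum()  # normalize scores into a sampling distribution
    idx = rng.choice(len(candidates), size=k, replace=False, p=p)
    return [candidates[i] for i in idx]


def contrastive_loss_with_margin(anchor, positive, received, margin,
                                 reg_weight=0.1, temperature=0.5):
    """Illustrative contrastive loss (NT-Xent-style) plus a margin
    regularizer over embeddings received via D2D exchange.

    A larger `margin` penalizes more received embeddings for lying
    close to the anchor, loosely mirroring how a dynamic contrastive
    margin can adjust how much of the latent space is explored.
    """
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    # Standard contrastive term: pull the anchor toward its augmented
    # positive, push it away from the received (negative) embeddings.
    pos = np.exp(cos(anchor, positive) / temperature)
    neg = sum(np.exp(cos(anchor, r) / temperature) for r in received)
    nt_xent = -np.log(pos / (pos + neg))

    # Hinge-style regularizer: penalize received embeddings whose
    # cosine distance to the anchor falls inside the margin.
    reg = sum(max(0.0, margin - (1.0 - cos(anchor, r))) for r in received)
    return nt_xent + reg_weight * reg
```

With random unit-scale embeddings, increasing `margin` never decreases the loss, since more received embeddings fall inside the penalized region; that monotone effect is the knob the dynamic margin tunes during training.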

Authors (4)
  1. Satyavrat Wagle (8 papers)
  2. Seyyedali Hosseinalipour (83 papers)
  3. Naji Khosravan (19 papers)
  4. Christopher G. Brinton (109 papers)
Citations (1)