
From Federated to Fog Learning: Distributed Machine Learning over Heterogeneous Wireless Networks (2006.03594v3)

Published 7 Jun 2020 in cs.DC, cs.LG, cs.NI, and stat.ML

Abstract: Machine learning (ML) tasks are becoming ubiquitous in today's network applications. Federated learning has recently emerged as a technique for training ML models at the network edge by leveraging processing capabilities across the nodes that collect the data. There are several challenges with employing conventional federated learning in contemporary networks, due to the significant heterogeneity in compute and communication capabilities that exists across devices. To address this, we advocate a new learning paradigm called fog learning, which intelligently distributes ML model training across the continuum of nodes from edge devices to cloud servers. Fog learning enhances federated learning along three major dimensions: network, heterogeneity, and proximity. It considers a multi-layer hybrid learning framework consisting of heterogeneous devices at varying proximities. It accounts for the topology structures of the local networks among the heterogeneous nodes at each network layer, orchestrating them for collaborative/cooperative learning through device-to-device (D2D) communications. This migrates from star network topologies used for parameter transfers in federated learning to more distributed topologies at scale. We discuss several open research directions toward realizing fog learning.
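Below is a minimal, illustrative sketch (in Python/NumPy, not taken from the paper) of the multi-layer aggregation pattern the abstract describes: devices within a cluster first mix their models over a local D2D topology, then one model per cluster is passed up and averaged at a higher layer. The helper names (`d2d_consensus`, `fog_aggregate`), the triangle clusters, and the uniform mixing weights are assumptions made for illustration, not the authors' exact protocol.

```python
import numpy as np

# Illustrative sketch of multi-layer ("fog") aggregation: devices in each
# cluster first run a few device-to-device (D2D) consensus rounds over a
# local topology; one device per cluster then uploads the mixed model, and
# the uploads are averaged at the parent layer (e.g., an edge or cloud
# server). Names and structure are assumptions, not the paper's protocol.

def d2d_consensus(models, adjacency, rounds=3):
    """Mix models over a D2D graph by repeated neighborhood averaging."""
    models = [m.copy() for m in models]
    for _ in range(rounds):
        mixed = []
        for i in range(len(models)):
            group = [models[i]] + [models[j] for j in range(len(models))
                                   if adjacency[i][j]]
            mixed.append(np.mean(group, axis=0))
        models = mixed
    return models

def fog_aggregate(clusters, rng):
    """Run D2D consensus per cluster, then average one upload per cluster."""
    uploads = []
    for models, adjacency in clusters:
        mixed = d2d_consensus(models, adjacency)
        uploads.append(mixed[rng.integers(len(mixed))])  # sampled uploader
    return np.mean(uploads, axis=0)

rng = np.random.default_rng(0)
dim = 4
triangle = [[0, 1, 1], [1, 0, 1], [1, 1, 0]]  # 3 fully connected devices
clusters = [([rng.normal(size=dim) for _ in range(3)], triangle)
            for _ in range(2)]
print(fog_aggregate(clusters, rng))
```

Replacing the uniform neighborhood average with a properly weighted (e.g., doubly stochastic) mixing matrix, and interleaving local gradient steps between mixing rounds, would bring this sketch closer to practical hierarchical/fog training schemes.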

Authors (5)
  1. Seyyedali Hosseinalipour (83 papers)
  2. Christopher G. Brinton (109 papers)
  3. Vaneet Aggarwal (222 papers)
  4. Huaiyu Dai (102 papers)
  5. Mung Chiang (65 papers)
Citations (11)
