Distilling On-Device Intelligence at the Network Edge (1908.05895v1)

Published 16 Aug 2019 in cs.IT, cs.LG, cs.NI, eess.SP, and math.IT

Abstract: Devices at the edge of wireless networks are the last-mile data sources for ML. As opposed to traditional ready-made public datasets, these user-generated private datasets reflect the freshest local environments in real time. They are thus indispensable for enabling mission-critical intelligent systems, ranging from fog radio access networks (RANs) to driverless cars and e-Health wearables. This article focuses on how to distill high-quality on-device ML models, using fog computing, from such user-generated private data dispersed across wirelessly connected devices. To this end, we introduce communication-efficient and privacy-preserving distributed ML frameworks, termed fog ML (FML), wherein on-device ML models are trained by exchanging model parameters, model outputs, and surrogate data. We then present advanced FML frameworks addressing wireless RAN characteristics, limited on-device resources, and imbalanced data distributions. Our study suggests that the full potential of FML can be reached by co-designing communication and distributed ML operations while accounting for heterogeneous hardware specifications, data characteristics, and user requirements.
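
To make the "exchanging model parameters" mode of FML concrete, the sketch below shows a minimal federated-averaging loop in which each device trains on its own private data and only uploads model parameters to an aggregator. The linear-regression task, dataset split, and names such as local_sgd_step, DEVICES, and ROUNDS are illustrative assumptions, not from the paper; the article's FML frameworks also cover exchanging model outputs and surrogate data, which are not shown here.

```python
# Hypothetical minimal sketch of parameter-exchange FML (federated averaging).
# Each "device" holds private data; only parameters ever leave the device.
import numpy as np

rng = np.random.default_rng(0)
DEVICES = 5          # edge devices holding private local datasets (assumed)
ROUNDS = 20          # communication rounds with the fog/edge aggregator
LOCAL_STEPS = 10     # local SGD steps per round
LR = 0.1

# Synthetic private data per device: noisy observations of a linear model.
true_w = np.array([2.0, -1.0])
local_data = []
for _ in range(DEVICES):
    X = rng.normal(size=(50, 2))
    y = X @ true_w + 0.1 * rng.normal(size=50)
    local_data.append((X, y))

def local_sgd_step(w, X, y, lr):
    """One gradient step on a device's private least-squares loss."""
    grad = 2.0 * X.T @ (X @ w - y) / len(y)
    return w - lr * grad

global_w = np.zeros(2)
for _ in range(ROUNDS):
    updates = []
    for X, y in local_data:
        w = global_w.copy()               # download current global parameters
        for _ in range(LOCAL_STEPS):
            w = local_sgd_step(w, X, y, LR)
        updates.append(w)                 # upload parameters, never raw data
    global_w = np.mean(updates, axis=0)   # aggregator averages the updates

print("estimated weights:", global_w)
```

Exchanging model outputs or surrogate data instead of parameters follows the same round structure but changes what is uploaded, which is how the article trades off communication cost against privacy and accuracy.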

Authors (9)
  1. Jihong Park (123 papers)
  2. Shiqiang Wang (79 papers)
  3. Anis Elgabli (28 papers)
  4. Seungeun Oh (11 papers)
  5. Eunjeong Jeong (8 papers)
  6. Han Cha (7 papers)
  7. Hyesung Kim (12 papers)
  8. Seong-Lyun Kim (81 papers)
  9. Mehdi Bennis (333 papers)
Citations (28)
