Communicate to Learn at the Edge (2009.13269v1)

Published 28 Sep 2020 in eess.SP, cs.IT, cs.LG, and math.IT

Abstract: Bringing the success of modern ML techniques to mobile devices can enable many new services and businesses, but it also poses significant technical and research challenges. Two factors that are critical for the success of ML algorithms are massive amounts of data and processing power, both of which are plentiful, yet highly distributed, at the network edge. Moreover, edge devices are connected through bandwidth- and power-limited wireless links that suffer from noise, time variations, and interference. Information and coding theory have laid the foundations of reliable and efficient communication in the presence of channel imperfections, and their application in modern wireless networks has been a tremendous success. However, there is a clear disconnect between current coding and communication schemes and the ML algorithms deployed at the network edge. In this paper, we challenge the current approach that treats these problems separately, and argue for a joint communication and learning paradigm for both the training and inference stages of edge learning.

Authors (6)
  1. Deniz Gunduz
  2. David Burth Kurka
  3. Mikolaj Jankowski
  4. Mohammad Mohammadi Amiri
  5. Emre Ozfatura
  6. Sreejith Sreekumar
Citations (62)