
D2D-Enabled Data Sharing for Distributed Machine Learning at Wireless Network Edge (2001.11342v1)

Published 28 Jan 2020 in eess.SP, cs.LG, and stat.ML

Abstract: Mobile edge learning is an emerging technique that enables distributed edge devices to collaboratively train shared machine learning models by exploiting their local data samples and their communication and computation resources. To deal with the straggler dilemma faced by this technique, this paper proposes a new device-to-device (D2D) enabled data sharing approach, in which edge devices share data samples with one another over communication links in order to properly adjust their computation loads and thereby increase the training speed. Under this setup, we optimize the radio resource allocation for both data sharing and distributed training, with the objective of minimizing the total training delay under fixed numbers of local and global iterations. Numerical results show that the proposed data sharing design significantly reduces the training delay, and also improves the training accuracy when the data samples are non-independent and identically distributed (non-IID) among edge devices.
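
The core idea behind the delay reduction is straggler mitigation: after D2D sharing, faster devices hold more samples, so no single slow device dictates the per-iteration delay. The sketch below illustrates that intuition only and is not the paper's algorithm; the function names and numbers are illustrative assumptions, and it deliberately ignores the D2D transmission cost of moving samples, which the paper's joint radio resource allocation accounts for.

```python
# Minimal sketch (illustrative, not the paper's method): rebalance sample
# counts so each device's per-iteration compute time is roughly equal.

def balance_loads(sample_counts, compute_speeds):
    """Target sample counts proportional to compute speed (samples/sec)."""
    total_samples = sum(sample_counts)
    total_speed = sum(compute_speeds)
    # Device k ends up with a share of samples matching its share of the
    # total speed, so all devices finish a local iteration at the same time.
    return [round(total_samples * s / total_speed) for s in compute_speeds]

def per_iteration_delay(sample_counts, compute_speeds):
    """Per-iteration compute delay is set by the slowest (straggler) device."""
    return max(n / s for n, s in zip(sample_counts, compute_speeds))

if __name__ == "__main__":
    counts = [1000, 1000, 1000]    # uniform data across devices
    speeds = [50.0, 200.0, 400.0]  # heterogeneous compute (assumed values)
    print(per_iteration_delay(counts, speeds))            # 20.0 s: straggler-bound
    balanced = balance_loads(counts, speeds)
    print(per_iteration_delay(balanced, speeds))          # ~4.6 s after sharing
```

In the paper's setting this trade-off is two-sided: sharing data costs transmission time up front, so the optimization balances that one-time cost against the per-iteration savings accumulated over the fixed numbers of local and global iterations.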

Authors (4)
  1. Xiaoran Cai (1 paper)
  2. Xiaopeng Mo (5 papers)
  3. Junyang Chen (28 papers)
  4. Jie Xu (467 papers)
Citations (25)
