
Gradient Scheduling with Global Momentum for Non-IID Data Distributed Asynchronous Training (1902.07848v4)

Published 21 Feb 2019 in cs.DC and cs.LG

Abstract: Distributed asynchronous offline training has received widespread attention in recent years because of its high performance on large-scale data and complex models. As data move from cloud-centric storage to edge nodes, a major challenge for distributed machine learning systems is how to handle naturally non-independent and identically distributed (non-IID) data during training. Previous asynchronous training methods perform poorly on non-IID data because the training process fluctuates greatly, leading to abnormal convergence. We propose a gradient scheduling algorithm with partly averaged gradients and global momentum (GSGM) for distributed asynchronous training on non-IID data. Our key idea is to apply global momentum and a local average to the biased gradients after scheduling, in order to keep the training process steady. Experimental results show that for non-IID data training under the same experimental conditions, applying GSGM to popular optimization algorithms achieves a 20% increase in training stability with a slight improvement in accuracy on the Fashion-MNIST and CIFAR-10 datasets. Meanwhile, when the distributed scale is expanded on the CIFAR-100 dataset, resulting in a sparse data distribution, GSGM achieves a 37% improvement in training stability. Moreover, only GSGM converges well when the number of computing nodes grows to 30, compared with state-of-the-art distributed asynchronous algorithms. At the same time, GSGM is robust to different degrees of non-IID data.

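The abstract describes the key mechanism (schedule gradients, partly average them, then apply a single global momentum buffer) but not the exact scheduling rule. The following is a minimal sketch of a GSGM-style parameter-server update under those assumptions; the class and method names (`GSGMServer`, `receive`, `step`) are hypothetical and not from the paper.

```python
import numpy as np

class GSGMServer:
    """Minimal sketch of a GSGM-style parameter-server update.

    Assumed behavior (inferred from the abstract, not the paper's exact
    algorithm): the server keeps the most recent gradient from each
    worker; on each update it averages the gradients of a scheduled
    subset of workers ("partly averaged gradients") and applies one
    shared global momentum buffer to smooth the biased updates.
    """

    def __init__(self, dim, lr=0.01, momentum=0.9):
        self.w = np.zeros(dim)   # model parameters
        self.v = np.zeros(dim)   # global momentum buffer (shared by all workers)
        self.lr = lr
        self.mu = momentum
        self.latest = {}         # worker id -> latest (possibly stale) gradient

    def receive(self, worker_id, grad):
        """Record a worker's newest gradient, which may be biased by its non-IID shard."""
        self.latest[worker_id] = np.asarray(grad)

    def step(self, scheduled_ids):
        """Partly average the scheduled gradients, then apply global momentum."""
        grads = [self.latest[i] for i in scheduled_ids if i in self.latest]
        if not grads:
            return self.w
        g_avg = np.mean(grads, axis=0)     # local average over the scheduled workers
        self.v = self.mu * self.v + g_avg  # global momentum accumulates across steps
        self.w -= self.lr * self.v
        return self.w
```

In this reading, the global momentum buffer carries history from all workers' contributions, so a single worker's shard-biased gradient cannot swing the model sharply, which is consistent with the stability gains the abstract reports.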
Authors (7)
  1. Chengjie Li (4 papers)
  2. Ruixuan Li (60 papers)
  3. Haozhao Wang (52 papers)
  4. Yuhua Li (29 papers)
  5. Pan Zhou (220 papers)
  6. Song Guo (138 papers)
  7. Keqin Li (61 papers)
Citations (15)
