Gossip training for deep learning (1611.09726v1)

Published 29 Nov 2016 in cs.CV, cs.LG, and stat.ML

Abstract: We address the issue of speeding up the training of convolutional networks. We study a distributed method adapted to stochastic gradient descent (SGD). The parallel optimization setup uses several threads, each applying individual gradient descent steps to a local variable. We propose a new way to share information between threads, inspired by gossip algorithms, that shows good consensus convergence properties. Our method, called GoSGD, has the advantage of being fully asynchronous and decentralized. We compare our method to the recent EASGD of \cite{elastic} on CIFAR-10 and show encouraging results.

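To illustrate the gossip idea described in the abstract, the following is a minimal sketch of pairwise parameter exchange between SGD workers. All names and parameters (grad, p_exchange, mix, n_workers) are illustrative assumptions, not the paper's exact GoSGD update rule, and the loop below is a sequential stand-in for the asynchronous threads the paper describes.

```python
# Sketch of gossip-style parameter averaging between SGD workers (assumed details).
import random
import numpy as np

def grad(w, batch):
    """Placeholder mini-batch gradient (least-squares loss used as an example)."""
    x, y = batch
    return 2 * x.T @ (x @ w - y) / len(y)

def gossip_sgd(batches, dim, n_workers=4, steps=1000, lr=0.01,
               p_exchange=0.1, mix=0.5):
    # Each worker keeps its own local copy of the parameters.
    workers = [np.zeros(dim) for _ in range(n_workers)]

    for _ in range(steps):
        for i in range(n_workers):
            # Local SGD step on a randomly drawn mini-batch.
            workers[i] -= lr * grad(workers[i], random.choice(batches))

            # With small probability, gossip: pick a random peer and
            # replace both local variables with a weighted average.
            if random.random() < p_exchange:
                j = random.choice([k for k in range(n_workers) if k != i])
                avg = mix * workers[i] + (1 - mix) * workers[j]
                workers[i], workers[j] = avg.copy(), avg.copy()

    # Consensus estimate: average of the local variables.
    return np.mean(workers, axis=0)
```

The occasional pairwise averaging plays the role of the gossip exchange: information spreads between workers without a central parameter server, which is what makes the scheme decentralized.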
Authors (4)
  1. Michael Blot (7 papers)
  2. David Picard (44 papers)
  3. Matthieu Cord (129 papers)
  4. Nicolas Thome (53 papers)
Citations (107)
