Parallel Training of Deep Networks with Local Updates (2012.03837v2)

Published 7 Dec 2020 in cs.LG, cs.AI, and cs.NE

Abstract: Deep learning models trained on large data sets have been widely successful in both vision and language domains. As state-of-the-art deep learning architectures have continued to grow in parameter count, so have the compute budgets and times required to train them, increasing the need for compute-efficient methods that parallelize training. Two common approaches to parallelize the training of deep networks have been data and model parallelism. While useful, data and model parallelism suffer from diminishing returns in terms of compute efficiency for large batch sizes. In this paper, we investigate how to continue scaling compute efficiently beyond the point of diminishing returns for large batches through local parallelism, a framework which parallelizes training of individual layers in deep networks by replacing global backpropagation with truncated layer-wise backpropagation. Local parallelism enables fully asynchronous layer-wise parallelism with a low memory footprint, and requires little communication overhead compared with model parallelism. We show results in both vision and language domains across a diverse set of architectures, and find that local parallelism is particularly effective in the high-compute regime.
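For intuition, below is a minimal PyTorch sketch of the local-update idea described in the abstract: each block is trained against its own auxiliary loss, and gradients are stopped at block boundaries instead of backpropagating globally, so blocks could in principle be updated independently on separate devices. The block sizes, auxiliary classifier heads, and hyperparameters here are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class LocalBlock(nn.Module):
    """A block trained with its own auxiliary loss; no gradient flows to earlier blocks."""
    def __init__(self, in_dim, out_dim, num_classes):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(in_dim, out_dim), nn.ReLU())
        self.aux_head = nn.Linear(out_dim, num_classes)  # local head supplying the block's loss

    def forward(self, x):
        return self.body(x)

# Illustrative three-block network and per-block optimizers (assumed, not from the paper).
blocks = [LocalBlock(784, 256, 10), LocalBlock(256, 256, 10), LocalBlock(256, 128, 10)]
opts = [torch.optim.SGD(b.parameters(), lr=1e-2) for b in blocks]
criterion = nn.CrossEntropyLoss()

def local_update(x, y):
    """One training step with truncated, layer-wise backpropagation.
    detach() stops gradients at each block boundary, so each block's update
    depends only on its own local loss and could run in parallel with the others."""
    h = x
    for block, opt in zip(blocks, opts):
        h = block(h.detach())                    # no global backprop across blocks
        loss = criterion(block.aux_head(h), y)   # block-local auxiliary loss
        opt.zero_grad()
        loss.backward()
        opt.step()
    return loss.item()

# Example usage with random data.
x = torch.randn(32, 784)
y = torch.randint(0, 10, (32,))
print(local_update(x, y))
```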

Authors (8)
  1. Michael Laskin (20 papers)
  2. Luke Metz (33 papers)
  3. Seth Nabarro (5 papers)
  4. Mark Saroufim (4 papers)
  5. Badreddine Noune (3 papers)
  6. Carlo Luschi (18 papers)
  7. Jascha Sohl-Dickstein (88 papers)
  8. Pieter Abbeel (372 papers)
Citations (26)