Leapfrogging for parallelism in deep neural networks (1801.04928v1)
Published 15 Jan 2018 in cs.LG and cs.DC
Abstract: We present a technique, which we term leapfrogging, to parallelize backpropagation in deep neural networks. We show that this technique yields a savings of $1 - 1/k$ of a dominant term in backpropagation, where $k$ is the number of threads (or GPUs).
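As a quick worked instance of the abstract's claim (the instantiation is ours, not from the paper): with $k = 4$ threads, the fraction of the dominant backpropagation term saved is

$$1 - \frac{1}{k} = 1 - \frac{1}{4} = \frac{3}{4},$$

i.e. roughly 75% of that term, with the saving approaching 100% as $k$ grows.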