Distributed learning of CNNs on heterogeneous CPU/GPU architectures (1712.02546v1)

Published 7 Dec 2017 in cs.DC

Abstract: Convolutional Neural Networks (CNNs) have been shown to be powerful classification tools in tasks ranging from check reading to medical diagnosis, reaching close to human perception and in some cases surpassing it. However, the problems to solve are becoming larger and more complex, which translates into larger CNNs and training times so long that not even the adoption of Graphics Processing Units (GPUs) could keep up with them. This problem is partially solved by using more processing units and the distributed training methods offered by several frameworks dedicated to neural network training. However, these techniques do not take full advantage of the parallelization offered by CNNs, nor of the cooperative use of heterogeneous devices with different processing capabilities, clock speeds, memory sizes, and other characteristics. This paper presents a new method for the parallel training of CNNs that can be considered a particular instantiation of model parallelism, where only the convolutional layer is distributed. In fact, the convolutions processed during training (forward and backward propagation included) represent $60$-$90\%$ of global processing time. The paper analyzes the influence of network size, bandwidth, batch size, and number of devices (including their processing capabilities), among other parameters. Results show that this technique is capable of reducing training time without affecting classification performance, for both CPUs and GPUs. For the CIFAR-10 dataset, using a CNN with two convolutional layers of $500$ and $1500$ kernels, respectively, the best speedups achieved are $3.28\times$ with four CPUs and $2.45\times$ with three GPUs. Modern imaging datasets, larger and more complex than CIFAR-10, will certainly spend more than $60$-$90\%$ of their processing time calculating convolutions, and speedups will tend to increase accordingly.
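The core idea is a form of model parallelism in which only the convolutional layer is split: each device receives the full input batch but computes only a subset of the layer's kernels, sized according to that device's speed, and the partial feature maps are then gathered and concatenated. Below is a minimal PyTorch sketch of this kernel-sharding idea; it is not the authors' implementation, and the `ShardedConv2d` class, the `shares` argument, the device list, and the 2:1 split ratio are assumptions chosen for illustration (the example also assumes a CUDA-capable GPU is available).

```python
import torch
import torch.nn as nn

class ShardedConv2d(nn.Module):
    """Hypothetical sketch: split one conv layer's kernels (output
    channels) across heterogeneous devices, proportionally to an
    assumed per-device throughput share."""

    def __init__(self, in_ch, out_ch, kernel_size, devices, shares):
        super().__init__()
        # Number of kernels assigned to each device, proportional to its share.
        splits = [round(out_ch * s) for s in shares]
        splits[-1] = out_ch - sum(splits[:-1])  # make the counts sum to out_ch
        self.devices = devices
        self.shards = nn.ModuleList(
            nn.Conv2d(in_ch, n, kernel_size).to(d)
            for n, d in zip(splits, devices)
        )

    def forward(self, x):
        # Broadcast the input to every device, convolve each kernel shard
        # locally, then gather the partial feature maps on the first device
        # and concatenate them along the channel dimension.
        outs = [conv(x.to(d)) for conv, d in zip(self.shards, self.devices)]
        return torch.cat([o.to(self.devices[0]) for o in outs], dim=1)

# Example: a 500-kernel layer, as in the paper's first convolutional layer,
# split 2:1 between one GPU and the host CPU (the ratio is an assumption,
# standing in for measured device capabilities).
layer = ShardedConv2d(3, 500, kernel_size=5,
                      devices=["cuda:0", "cpu"], shares=[2/3, 1/3])
```

In this sketch, autograd runs the backward convolutions on the same devices as the forward ones, which is consistent with the abstract's note that both forward and backward propagation are distributed; the broadcast and gather steps are where the bandwidth and batch-size effects analyzed in the paper would show up.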

Authors (3)
  1. Gabriel Falcao (5 papers)
  2. Luís A. Alexandre (35 papers)
  3. Jose Marques (2 papers)
Citations (5)
