Data-parallel distributed training of very large models beyond GPU capacity (1811.12174v1)

Published 29 Nov 2018 in cs.DC and cs.LG

Abstract: GPUs have limited memory, which makes it difficult to train wide and/or deep models whose memory requirements exceed GPU capacity. This paper shows how an open source tool called Large Model Support (LMS) can utilize a high-bandwidth NVLink connection between CPUs and GPUs to train deep convolutional networks. LMS performs tensor swapping between CPU memory and GPU memory so that only the minimal set of tensors required in a training step is kept in GPU memory. It is also shown how LMS can be combined with an MPI-based distributed deep learning module to train models in a data-parallel fashion across multiple GPUs, with each GPU using CPU memory for tensor swapping. The hardware architecture that enables the high-bandwidth GPU-CPU link is discussed, along with the associated set of software tools available in the PowerAI package.
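To make the two ideas in the abstract concrete, the sketch below is a minimal, hedged analogue rather than the paper's LMS/PowerAI code: it uses PyTorch's save_on_cpu saved-tensors hook as a stand-in for LMS-style CPU-GPU tensor swapping, and mpi4py's Allreduce as a stand-in for the MPI-based data-parallel gradient averaging (the paper's DDL module). The model, tensor shapes, learning rate, and the assumption of one CUDA device per MPI rank are illustrative choices, not taken from the paper.

```python
# Hedged sketch, not the paper's LMS/PowerAI implementation.
# (1) Saved activations spill to (pinned) CPU memory during the forward pass
#     and are copied back to the GPU on demand in backward -- analogous to
#     LMS tensor swapping over NVLink.
# (2) Each MPI rank trains on its own shard of the batch and gradients are
#     averaged across ranks before the update -- analogous to the MPI-based
#     data-parallel module described in the abstract.
# Assumes at least one CUDA device is visible to each MPI rank.
import torch
from mpi4py import MPI

comm = MPI.COMM_WORLD
device = torch.device("cuda", comm.Get_rank() % torch.cuda.device_count())

model = torch.nn.Sequential(
    torch.nn.Conv2d(3, 64, 3, padding=1),
    torch.nn.ReLU(),
    torch.nn.Flatten(),
    torch.nn.Linear(64 * 32 * 32, 10),
).to(device)
opt = torch.optim.SGD(model.parameters(), lr=0.01)

x = torch.randn(8, 3, 32, 32, device=device)   # this rank's shard of the batch
y = torch.randint(0, 10, (8,), device=device)

# (1) Keep tensors saved for backward in pinned CPU memory.
with torch.autograd.graph.save_on_cpu(pin_memory=True):
    loss = torch.nn.functional.cross_entropy(model(x), y)
loss.backward()

# (2) Average gradients across all MPI ranks before the optimizer step.
for p in model.parameters():
    g = p.grad.detach().cpu().numpy()
    comm.Allreduce(MPI.IN_PLACE, g, op=MPI.SUM)
    p.grad.copy_(torch.from_numpy(g).to(device) / comm.Get_size())
opt.step()
```

In the paper's setting, the swapping is handled by LMS inside the training framework and the gradient reduction by the MPI-based distributed deep learning module shipped with PowerAI; the sketch only mirrors the data flow: activations spill to host memory during the forward pass, return on demand during backward, and gradients are averaged across ranks before each update.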

Authors (6)
  1. Samuel Matzek (2 papers)
  2. Max Grossman (2 papers)
  3. Minsik Cho (36 papers)
  4. Anar Yusifov (1 paper)
  5. Bryant Nelson (1 paper)
  6. Amit Juneja (1 paper)
Citations (3)
