Mini-batch Serialization: CNN Training with Inter-layer Data Reuse (1810.00307v4)

Published 30 Sep 2018 in cs.LG and cs.AR

Abstract: Training convolutional neural networks (CNNs) requires intense computations and high memory bandwidth. We find that bandwidth today is over-provisioned because most memory accesses in CNN training can be eliminated by rearranging computation to better utilize on-chip buffers and avoid traffic resulting from large per-layer memory footprints. We introduce the MBS CNN training approach that significantly reduces memory traffic by partially serializing mini-batch processing across groups of layers. This optimizes reuse within on-chip buffers and balances both intra-layer and inter-layer reuse. We also introduce the WaveCore CNN training accelerator that effectively trains CNNs in the MBS approach with high functional-unit utilization. Combined, WaveCore and MBS reduce DRAM traffic by 75%, improve performance by 53%, and save 26% system energy for modern deep CNN training compared to conventional training mechanisms and accelerators.

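To make the core idea of the abstract concrete, here is a minimal sketch of mini-batch serialization, assuming a simple forward pass in PyTorch. The layer grouping, sub-batch size, and the specific network below are illustrative assumptions, not the paper's actual scheduling algorithm or the WaveCore hardware; the point is only that within a layer group, each sub-batch traverses all layers before the next sub-batch starts, so the live intermediate activations stay small enough to fit in on-chip buffers.

```python
# Sketch of mini-batch serialization (MBS) across layer groups.
# Illustrative only: layer groups and sub-batch size are hypothetical choices.
import torch
import torch.nn as nn


def mbs_forward(layer_groups, minibatch, sub_batch_size):
    """Process a mini-batch group by group; within each group, sub-batches
    are serialized so only one sub-batch's intermediate activations are
    live at a time (the on-chip-buffer-sized working set)."""
    x = minibatch
    for group in layer_groups:
        outputs = []
        for sub in torch.split(x, sub_batch_size, dim=0):
            # Each sub-batch flows through the whole group before the next
            # sub-batch begins, enabling inter-layer reuse of its activations.
            for layer in group:
                sub = layer(sub)
            outputs.append(sub)
        # Re-assemble the full mini-batch before entering the next group.
        x = torch.cat(outputs, dim=0)
    return x


if __name__ == "__main__":
    # Hypothetical 4-layer CNN split into two layer groups.
    groups = [
        nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU()),
        nn.Sequential(nn.Conv2d(16, 32, 3, padding=1), nn.ReLU()),
    ]
    batch = torch.randn(32, 3, 64, 64)  # mini-batch of 32 images
    out = mbs_forward(groups, batch, sub_batch_size=8)
    print(out.shape)  # torch.Size([32, 32, 64, 64])
```

In this sketch, reducing `sub_batch_size` shrinks the per-group activation footprint at the cost of less intra-layer batching, which mirrors the intra-layer versus inter-layer reuse balance the abstract describes.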
Authors (6)
  1. Sangkug Lym (7 papers)
  2. Armand Behroozi (1 paper)
  3. Wei Wen (49 papers)
  4. Ge Li (213 papers)
  5. Yongkee Kwon (5 papers)
  6. Mattan Erez (16 papers)
Citations (25)