Go beyond End-to-End Training: Boosting Greedy Local Learning with Context Supply (2312.07636v2)
Abstract: Traditional end-to-end (E2E) training of deep networks requires storing intermediate activations for back-propagation, resulting in a large memory footprint on GPUs and restricting model parallelization. As an alternative, greedy local learning partitions the network into gradient-isolated modules and trains each module in a supervised manner with a local auxiliary loss, enabling asynchronous and parallel training that substantially reduces memory cost. However, empirical experiments reveal that as the number of gradient-isolated modules increases, the performance of the local learning scheme degrades substantially, severely limiting its scalability. To address this issue, we theoretically analyze greedy local learning from the standpoint of information theory and propose ContSup, a scheme that incorporates context supply between isolated modules to compensate for information loss. Experiments on benchmark datasets (CIFAR, SVHN, and STL-10) achieve SOTA results and show that our method significantly improves the performance of greedy local learning with minimal memory and computational overhead, allowing the number of isolated modules to be increased. Our code is available at https://github.com/Tab-ct/ContSup.
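To make the training recipe concrete, the sketch below illustrates greedy local learning with a simple context-supply pathway in PyTorch: each gradient-isolated module receives a detached copy of its predecessor's output, concatenated with a resized copy of the raw input acting as context, and is updated only by its own auxiliary cross-entropy loss. The class names (`LocalModule`), the choice of raw-input context, and all hyperparameters are illustrative assumptions for exposition, not the exact ContSup architecture from the paper.

```python
# A minimal sketch of greedy local learning with a context-supply pathway.
# Module shapes, the context injection (concatenating a resized raw input),
# and hyperparameters are illustrative assumptions, not the authors' design.
import torch
import torch.nn as nn
import torch.nn.functional as F


class LocalModule(nn.Module):
    """One gradient-isolated module with its own auxiliary classifier."""

    def __init__(self, in_ch, out_ch, ctx_ch, num_classes):
        super().__init__()
        # The module body sees its detached input plus the context channels.
        self.body = nn.Sequential(
            nn.Conv2d(in_ch + ctx_ch, out_ch, 3, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )
        # Local auxiliary head used to compute this module's own loss.
        self.aux_head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(out_ch, num_classes)
        )

    def forward(self, x, ctx):
        # Resize the context to match x spatially, then concatenate it.
        ctx = F.interpolate(ctx, size=x.shape[-2:], mode="bilinear", align_corners=False)
        h = self.body(torch.cat([x, ctx], dim=1))
        return h, self.aux_head(h)


def train_step(modules, optimizers, images, labels):
    """Greedy local update: each module is trained only by its own loss."""
    x = images
    for module, opt in zip(modules, optimizers):
        # Gradient isolation: the input is detached, so the loss never
        # back-propagates into preceding modules.
        h, logits = module(x.detach(), images)  # raw input reused as context
        loss = F.cross_entropy(logits, labels)
        opt.zero_grad()
        loss.backward()
        opt.step()
        x = h  # forward the activations (detached again at the next module)
    return loss.item()


if __name__ == "__main__":
    num_classes, ctx_ch = 10, 3
    widths = [3, 32, 64, 128]
    modules = [
        LocalModule(widths[i], widths[i + 1], ctx_ch, num_classes)
        for i in range(len(widths) - 1)
    ]
    optimizers = [torch.optim.SGD(m.parameters(), lr=0.1) for m in modules]
    images = torch.randn(8, 3, 32, 32)
    labels = torch.randint(0, num_classes, (8,))
    print("last-module loss:", train_step(modules, optimizers, images, labels))
```

In this sketch the context pathway only re-injects the raw input; the point is that each isolated module gets an extra information source beyond its detached predecessor output, which is the role context supply plays in compensating for the information lost across module boundaries.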