Path-metrics, pruning, and generalization (2405.15006v1)
Abstract: Analyzing the behavior of ReLU neural networks often hinges on understanding the relationships between their parameters and the functions they implement. This paper proves a new bound on function distances in terms of the so-called path-metrics of the parameters. Since this bound is intrinsically invariant with respect to the rescaling symmetries of the networks, it sharpens previously known bounds. It is also, to the best of our knowledge, the first bound of its kind that is broadly applicable to modern networks such as ResNets, VGGs, U-nets, and many more. In contexts such as network pruning and quantization, the proposed path-metrics can be efficiently computed using only two forward passes. Beyond its intrinsic theoretical interest, the bound yields novel generalization guarantees as well as a promising proof of concept for rescaling-invariant pruning.
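To make the "two forward passes" claim concrete, here is a minimal sketch of the classical computation that underlies it: for a bias-free fully connected ReLU network, the ℓ1 path-norm (the sum over all input-output paths of the product of absolute weights along the path) equals a single forward pass of the network with absolute-valued weights evaluated at the all-ones input. The helper `l1_path_norm`, the toy architecture, and the magnitude threshold below are illustrative assumptions, not the paper's exact path-metric; the paper's metric compares two parameter vectors (e.g., a network and its pruned copy), which is where the second forward pass comes in.

```python
import torch
import torch.nn as nn

def l1_path_norm(model: nn.Sequential, input_dim: int) -> float:
    """L1 path-norm of a bias-free ReLU MLP.

    Computed as one forward pass with absolute-valued weights on the
    all-ones input: all intermediate activations are non-negative, so
    the ReLUs act as the identity and can be skipped. The outputs are
    summed, which corresponds to summing path weights over all output
    coordinates."""
    with torch.no_grad():
        x = torch.ones(1, input_dim)
        for layer in model:
            if isinstance(layer, nn.Linear):
                x = x @ layer.weight.abs().t()
            # ReLU layers are the identity on non-negative inputs; skip.
        return x.sum().item()

# Hypothetical pruning scenario: compare a network with a copy whose
# small weights are zeroed. Each path-norm costs one forward pass, so
# comparing the two networks costs two forward passes in total.
net = nn.Sequential(nn.Linear(8, 16, bias=False), nn.ReLU(),
                    nn.Linear(16, 1, bias=False))
pruned = nn.Sequential(nn.Linear(8, 16, bias=False), nn.ReLU(),
                       nn.Linear(16, 1, bias=False))
pruned.load_state_dict(net.state_dict())
with torch.no_grad():
    w = pruned[0].weight
    w[w.abs() < 0.05] = 0.0  # magnitude pruning with an arbitrary threshold

print(l1_path_norm(net, 8), l1_path_norm(pruned, 8))
```

Note that magnitude pruning is not rescaling-invariant (rescaling two consecutive layers changes which weights fall below the threshold), whereas the path-norm computed above is unchanged by such rescalings; this is precisely the gap the paper's rescaling-invariant pruning proof of concept targets.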