Generalized Gradient Descent is a Hypergraph Functor (2403.19845v1)
Abstract: Cartesian reverse derivative categories (CRDCs) provide an axiomatic generalization of the reverse derivative, which allows generalized analogues of classic optimization algorithms such as gradient descent to be applied to a broad class of problems. In this paper, we show that generalized gradient descent with respect to a given CRDC induces a hypergraph functor from a hypergraph category of optimization problems to a hypergraph category of dynamical systems. The domain of this functor consists of objective functions that are 1) general in the sense that they are defined with respect to an arbitrary CRDC, and 2) open in that they are decorated spans that can be composed with other such objective functions via variable sharing. The codomain is specified analogously as a category of general and open dynamical systems for the underlying CRDC. We describe how the hypergraph functor induces a distributed optimization algorithm for arbitrary composite problems specified in the domain. To illustrate the kinds of problems our framework can model, we show that parameter sharing models in multitask learning, a prevalent machine learning paradigm, yield a composite optimization problem for a given choice of CRDC. We then apply the gradient descent functor to this composite problem and describe the resulting distributed gradient descent algorithm for training parameter sharing models.
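As a rough illustration of the kind of algorithm such a functor produces, the sketch below instantiates the idea in the concrete CRDC of smooth maps on R^n, using the hedged update rule x ← x − α·R[f](x, 1), where R[f] is the reverse derivative of the objective. The reverse derivative is approximated here by finite differences, and all names (`Objective`, `share_variables`, `gradient_descent_system`) are illustrative assumptions, not constructions from the paper; composition by variable sharing is modeled simply as summing costs over a shared parameter vector, as in parameter sharing for multitask learning.

```python
# Minimal sketch (not the paper's formal construction) of generalized
# gradient descent driven by a reverse derivative, specialized to the
# CRDC of smooth maps on R^n with a finite-difference stand-in for R[f].
from dataclasses import dataclass
from typing import Callable
import numpy as np

@dataclass
class Objective:
    """An 'open' objective: a cost function on a shared variable space."""
    cost: Callable[[np.ndarray], float]

def reverse_derivative(f: Callable[[np.ndarray], float], eps: float = 1e-6):
    """Finite-difference approximation of the reverse derivative R[f]:
    given a point x and a covector dy, return dx = dy * grad f(x)."""
    def R(x: np.ndarray, dy: float) -> np.ndarray:
        grad = np.zeros_like(x)
        for i in range(x.size):
            e = np.zeros_like(x)
            e[i] = eps
            grad[i] = (f(x + e) - f(x - e)) / (2 * eps)
        return dy * grad
    return R

def gradient_descent_system(obj: Objective, step: float):
    """Map an objective to a discrete dynamical system
    x -> x - step * R[cost](x, 1), mimicking the functor's action."""
    R = reverse_derivative(obj.cost)
    return lambda x: x - step * R(x, 1.0)

def share_variables(*objs: Objective) -> Objective:
    """Compose open objectives by sharing their variable space:
    the composite cost is the sum of the component costs."""
    return Objective(lambda x: sum(o.cost(x) for o in objs))

if __name__ == "__main__":
    # Two toy 'tasks' sharing a single parameter vector (multitask setting).
    task1 = Objective(lambda x: float((x[0] - 1.0) ** 2 + x[1] ** 2))
    task2 = Objective(lambda x: float(x[0] ** 2 + (x[1] + 2.0) ** 2))
    composite = share_variables(task1, task2)

    update = gradient_descent_system(composite, step=0.1)
    x = np.array([3.0, 3.0])
    for _ in range(200):
        x = update(x)
    print(x)  # approaches (0.5, -1.0), the minimizer of the summed cost
```

In this toy example the shared parameter vector plays the role of the common variable along which the two open objectives are composed; applying the gradient descent construction to the composite yields a single dynamical system whose update aggregates the reverse derivatives of the parts.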