Conjugate Natural Selection (2208.13898v4)

Published 29 Aug 2022 in cs.LG, cs.NE, and stat.ML

Abstract: We prove that Fisher-Rao natural gradient descent (FR-NGD) optimally approximates the continuous-time replicator equation (an essential model of evolutionary dynamics), and term this correspondence "conjugate natural selection". This correspondence promises alternative approaches for evolutionary computation over continuous or high-dimensional hypothesis spaces. As a special case, FR-NGD also provides the optimal approximation of continuous Bayesian inference when hypotheses compete on the basis of predicting actual observations. In this case, the method avoids the need to compute prior probabilities. We demonstrate our findings on a non-convex optimization problem and a system identification task for a stochastic process with time-varying parameters.
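
To make the correspondence concrete: the replicator equation evolves a population distribution p over hypotheses according to dp_i/dt = p_i (f_i - sum_j p_j f_j), where f_i is the fitness of hypothesis i, while FR-NGD ascends expected fitness under the Fisher-Rao metric. The sketch below (Python/NumPy; not the authors' code, and the constant fitness landscape, dimension, and step size are illustrative assumptions) compares a forward-Euler discretization of the replicator flow with FR-NGD in a softmax parameterization, where the natural gradient of expected fitness works out to be the fitness vector itself.

import numpy as np

def softmax(theta):
    z = np.exp(theta - theta.max())
    return z / z.sum()

def replicator_euler(p, f, dt):
    # Forward-Euler step of the replicator ODE: dp_i/dt = p_i * (f_i - mean fitness).
    return p + dt * p * (f - p @ f)

def fr_ngd_step(theta, f, dt):
    # Fisher-Rao natural gradient ascent on expected fitness E_p[f] with
    # p = softmax(theta): preconditioning the ordinary gradient
    # p * (f - p @ f) by the (pseudo)inverse of the Fisher matrix
    # diag(p) - p p^T leaves just f (up to a constant shift, which the
    # softmax ignores), so the logits simply integrate fitness over time.
    return theta + dt * f

rng = np.random.default_rng(0)
f = rng.normal(size=5)        # fixed fitness landscape (illustrative)
p = np.full(5, 0.2)           # uniform initial population
theta = np.log(p)

for _ in range(200):
    p = replicator_euler(p, f, dt=0.02)
    theta = fr_ngd_step(theta, f, dt=0.02)

print(np.round(p, 4))
print(np.round(softmax(theta), 4))   # the two trajectories track each other

In the Bayesian special case mentioned above, f_i would be the instantaneous log-likelihood of incoming observations under hypothesis i, so the logits accumulate log-evidence and the softmax recovers posterior weights, which is consistent with the abstract's remark that the method avoids computing prior probabilities.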

References (28)
  1. Shun-Ichi Amari. Natural gradient works efficiently in learning. Neural computation, 10(2):251–276, 1998.
  2. Evolutionary computation 1: Basic algorithms and operators. CRC press, 2018.
  3. Evolutionary dynamics of multi-agent learning: A survey. Journal of Artificial Intelligence Research, 53:659–697, 2015.
  4. Gradient flow formulations of discrete and continuous evolutionary models: a unifying perspective. Acta Applicandae Mathematicae, 171(1):1–50, 2021.
  5. Ross Cressman and Yi Tao. The replicator equation and other game dynamics. Proceedings of the National Academy of Sciences, 111(supplement_3):10810–10817, 2014.
  6. Alexander L Fradkov. Early history of machine learning. IFAC-PapersOnLine, 53(2):1385–1390, 2020.
  7. Daniel Friedman. Evolutionary games in economics. Econometrica: journal of the econometric society, pages 637–666, 1991.
  8. Evolutionary games in natural, social, and virtual worlds. Oxford University Press, 2016.
  9. On the properties of the softmax function with application in game theory and reinforcement learning. arXiv preprint arXiv:1704.00805, 2017.
  10. Marc Harper. Information geometry and evolutionary game theory. arXiv preprint arXiv:0911.1383, 2009a.
  11. Marc Harper. The replicator equation as an inference dynamic. arXiv preprint arXiv:0911.1763, 2009b.
  12. Marc Harper. Escort evolutionary game theory. Physica D: Nonlinear Phenomena, 240(18):1411–1415, 2011.
  13. Momentum accelerates evolutionary dynamics. arXiv preprint arXiv:2007.02449, 2020.
  14. Neural replicator dynamics. arXiv preprint arXiv:1906.00190, 2019.
  15. Evolutionary games and population dynamics. Cambridge university press, 1998.
  16. Riemannian natural gradient methods. arXiv preprint arXiv:2207.07287, 2022.
  17. The Bayesian learning rule. arXiv preprint arXiv:2107.04562, 2021.
  18. The weighted majority algorithm. Information and computation, 108(2):212–261, 1994.
  19. Elisabeth Lloyd. Units and levels of selection. In Edward N. Zalta, editor, The Stanford Encyclopedia of Philosophy. Metaphysics Research Lab, Stanford University, Spring 2020 edition, 2020.
  20. James Martens. New insights and perspectives on the natural gradient method. The Journal of Machine Learning Research, 21(1):5776–5851, 2020.
  21. Geometry and convergence of natural policy gradient methods. arXiv preprint arXiv:2211.02105, 2022.
  22. Efficient natural gradient descent methods for large-scale optimization problems. arXiv preprint arXiv:2202.06236, 2022.
  23. Information-geometric optimization with natural selection. Entropy, 22(9):967, 2020.
  24. Fishy: Layerwise fisher approximation for higher-order neural network optimization. In Has it Trained Yet? NeurIPS 2022 Workshop, 2022.
  25. David C Queller. Fundamental theorems of evolution. The American Naturalist, 189(4):345–353, 2017.
  26. William H Sandholm. Population games and evolutionary dynamics. MIT press, 2010.
  27. The developmental, physiological, neural, and genetical causes and consequences of frequency-dependent selection in the wild. Annu. Rev. Ecol. Evol. Syst., 37:581–610, 2006.
  28. Charles C Tappert. Frank Rosenblatt, the father of deep learning. Proceedings of Student-Faculty Research Day, CSIS, Pace University, 2020.
