
Quality-Diversity Algorithms Can Provably Be Helpful for Optimization (2401.10539v2)

Published 19 Jan 2024 in cs.NE

Abstract: Quality-Diversity (QD) algorithms are a new type of Evolutionary Algorithms (EAs), aiming to find a set of high-performing, yet diverse solutions. They have found many successful applications in reinforcement learning and robotics, helping improve robustness in complex environments. Furthermore, they often empirically find a better overall solution than traditional search algorithms which explicitly search for a single highest-performing solution. However, their theoretical analysis lags far behind, leaving many fundamental questions unexplored. In this paper, we try to shed some light on the optimization ability of QD algorithms via rigorous running time analysis. By comparing the popular QD algorithm MAP-Elites with the $(\mu+1)$-EA (a typical EA focusing only on finding better objective values), we prove that on two NP-hard problem classes with wide applications, i.e., monotone approximately submodular maximization with a size constraint, and set cover, MAP-Elites can achieve the (asymptotically) optimal polynomial-time approximation ratio, while the $(\mu+1)$-EA requires exponential expected time on some instances. This provides theoretical justification that QD algorithms can be helpful for optimization, and discloses that the simultaneous search for high-performing solutions with diverse behaviors can provide stepping stones to good overall solutions and help avoid local optima.
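To make the setting concrete, the following is a minimal illustrative sketch (not the paper's exact algorithm or analysis) of MAP-Elites applied to subset selection over bit strings, where the behavior descriptor is the subset size $|x|$ and the archive keeps one elite per size. The toy weighted-coverage objective is an assumption introduced here for illustration; the paper's results concern general monotone approximately submodular functions and set cover.

```python
import random

def map_elites(f, n, k, iters=2000, seed=0):
    """MAP-Elites sketch for subset selection.

    Solutions are bit strings of length n; the behavior descriptor is
    the subset size |x|, so the archive holds one elite per size 0..n.
    f maps a frozenset of selected indices to a real value (assumed
    monotone); k is the size constraint applied when reporting."""
    rng = random.Random(seed)
    archive = {0: tuple([0] * n)}  # cell (subset size) -> elite bit string

    def fitness(x):
        return f(frozenset(i for i, b in enumerate(x) if b))

    for _ in range(iters):
        # pick a random occupied cell, mutate its elite by bit flips (rate 1/n)
        parent = archive[rng.choice(list(archive))]
        child = list(parent)
        for i in range(n):
            if rng.random() < 1.0 / n:
                child[i] ^= 1
        child = tuple(child)
        cell = sum(child)
        # a cell keeps only its best-so-far solution
        if cell not in archive or fitness(child) > fitness(archive[cell]):
            archive[cell] = child

    # report the best elite that satisfies the size constraint |x| <= k
    return max((x for s, x in archive.items() if s <= k), key=fitness)

# toy monotone submodular objective: weighted coverage (hypothetical instance)
weights = {0: 3.0, 1: 2.0, 2: 2.0, 3: 1.0}
cover = {0: {0, 1}, 1: {1, 2}, 2: {2, 3}, 3: {0, 3}}

def coverage(s):
    covered = set().union(*(cover[i] for i in s)) if s else set()
    return sum(weights[e] for e in covered)

best = map_elites(coverage, n=4, k=2)
print(best, coverage(frozenset(i for i, b in enumerate(best) if b)))
```

Note how infeasible cells (sizes above $k$) are still maintained in the archive: these diverse "stepping stones" are exactly what the paper argues helps MAP-Elites escape local optima that trap a purely objective-driven $(\mu+1)$-EA.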

Authors (3)
  1. Chao Qian
  2. Ke Xue
  3. Ren-Jian Wang
