Large-scale global optimization of ultra-high dimensional non-convex landscapes based on generative neural networks (2307.04065v1)

Published 9 Jul 2023 in cs.LG and math.OC

Abstract: We present a metaheuristic for non-convex optimization, based on the training of a deep generative network, which enables effective searching within continuous, ultra-high-dimensional landscapes. During network training, populations of sampled local gradients are utilized within a customized loss function to evolve the network output distribution toward one peaked at high-performing optima. The deep network architecture is tailored to support progressive growth over the course of training, which allows the algorithm to manage the curse of dimensionality characteristic of high-dimensional landscapes. We apply our concept to a range of standard optimization problems with dimensions as high as one thousand and show that our method performs better, with fewer function evaluations, than state-of-the-art algorithm benchmarks. We also discuss the role of deep network over-parameterization, loss function engineering, and proper network architecture selection in optimization, and why the required batch size of sampled local gradients is independent of problem dimension. These concepts form the foundation for a new class of algorithms that utilize customizable and expressive deep generative networks to solve non-convex optimization problems.
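
To make the procedure concrete, below is a minimal PyTorch sketch of the kind of generator-based search the abstract describes: a small network maps latent noise to a population of candidate solutions, local gradients of the objective are sampled at each candidate, and an exponentially weighted loss (here a softmax over the batch) pulls the output distribution toward better-performing regions. The Rastrigin objective, network sizes, sigma, and learning rate are illustrative assumptions, and the progressive-growth schedule described in the abstract is omitted; this is not the authors' reference implementation.

```python
import torch
import torch.nn as nn

# Illustrative benchmark objective (Rastrigin, to be minimized); any objective
# whose local gradients can be sampled could stand in here.
def objective(x):
    # x: (batch, dim) tensor of candidate solutions
    return 10.0 * x.shape[1] + (x ** 2 - 10.0 * torch.cos(2 * torch.pi * x)).sum(dim=1)

class Generator(nn.Module):
    """Maps latent noise to candidate solutions. A fixed-size stand-in for the
    progressively grown architecture described in the abstract."""
    def __init__(self, latent_dim=32, dim=100, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim, hidden), nn.LeakyReLU(0.2),
            nn.Linear(hidden, hidden), nn.LeakyReLU(0.2),
            nn.Linear(hidden, dim), nn.Tanh(),   # outputs in [-1, 1]
        )

    def forward(self, z):
        return 5.12 * self.net(z)                # rescale to the Rastrigin domain

latent_dim, dim, batch, sigma = 32, 100, 64, 2.0   # assumed hyperparameters
gen = Generator(latent_dim, dim)
opt = torch.optim.Adam(gen.parameters(), lr=1e-3)

for step in range(2000):
    z = torch.randn(batch, latent_dim)
    x = gen(z)                                   # population of candidate solutions
    # Sample local gradients of the objective at each candidate (detached copy).
    x_eval = x.detach().requires_grad_(True)
    f = objective(x_eval)
    (g,) = torch.autograd.grad(f.sum(), x_eval)
    # Customized loss: weight each sample by its relative quality and push it
    # along its own descent direction; gradients flow only through the generator.
    weights = torch.softmax(-f.detach() / sigma, dim=0)
    loss = (weights * (x * g.detach()).sum(dim=1)).sum()
    opt.zero_grad()
    loss.backward()
    opt.step()

with torch.no_grad():
    best = objective(gen(torch.randn(512, latent_dim))).min().item()
    print("best value found:", best)
```

Because the loss only couples each sample to its own local gradient and relative weight, the batch acts as a population-level estimate of where the output distribution should move, which is the mechanism the abstract credits for the dimension-independent batch size.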

