Large-scale global optimization of ultra-high dimensional non-convex landscapes based on generative neural networks (2307.04065v1)
Abstract: We present a metaheuristic for non-convex optimization, based on the training of a deep generative network, which enables effective searching within continuous, ultra-high dimensional landscapes. During network training, populations of sampled local gradients are utilized within a customized loss function to evolve the network output distribution function towards one peaked at high-performing optima. The deep network architecture is tailored to support progressive growth over the course of training, which allows the algorithm to manage the curse of dimensionality characteristic of high-dimensional landscapes. We apply our concept to a range of standard optimization problems with dimensions as high as one thousand and show that our method outperforms state-of-the-art algorithm benchmarks while requiring fewer function evaluations. We also discuss the roles of deep network over-parameterization, loss function engineering, and proper network architecture selection in optimization, and why the required batch size of sampled local gradients is independent of problem dimension. These concepts form the foundation for a new class of algorithms that utilize customizable and expressive deep generative networks to solve non-convex optimization problems.
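The following is a minimal sketch (not the authors' code) of the idea described in the abstract: a generative network maps latent noise to a population of candidate solutions, local gradients of the objective are sampled at those points, and a surrogate loss built from the gradients shifts the generator's output distribution toward high-performing optima. The test objective, the exponential performance weighting, the network architecture, and all hyperparameters are illustrative assumptions; the paper's progressive architecture growth is not reflected here.

```python
# Hedged sketch of generative-network-based global optimization, assuming PyTorch.
import torch
import torch.nn as nn

dim, latent_dim, batch = 100, 32, 64   # problem and sampling sizes (assumed)
sigma = 1.0                            # temperature of the performance weighting (assumed)

def objective(x):
    # Rastrigin function as a stand-in non-convex test landscape (assumed).
    return 10.0 * x.shape[-1] + (x**2 - 10.0 * torch.cos(2 * torch.pi * x)).sum(-1)

generator = nn.Sequential(             # simple fully connected generator (assumed architecture)
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, dim),
)
opt = torch.optim.Adam(generator.parameters(), lr=1e-3)

for step in range(2000):
    z = torch.randn(batch, latent_dim)
    x = generator(z)                   # population of candidate solutions

    # Sample local gradients of the objective at the generated points.
    x_eval = x.detach().requires_grad_(True)
    f = objective(x_eval)
    (grad,) = torch.autograd.grad(f.sum(), x_eval)

    # Surrogate loss: inner product of each sample with its detached local gradient,
    # weighted so that low-objective (high-performing) samples dominate. Minimizing it
    # with respect to the generator parameters nudges the output distribution downhill.
    weights = torch.softmax(-f.detach() / sigma, dim=0)
    loss = (weights * (x * grad.detach()).sum(-1)).sum()

    opt.zero_grad()
    loss.backward()
    opt.step()
```

In this sketch the gradient of the surrogate loss with respect to each sample is its weighted local objective gradient, so backpropagation through the generator plays the role of the distribution-evolution step the abstract describes; the specific weighting scheme is an assumption rather than the paper's loss function.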