Pretrained Optimization Model for Zero-Shot Black Box Optimization
Abstract: Zero-shot optimization targets tasks that were not seen during training, aiming to find the optimal solution with little or no adjustment of the optimizer. This capability is crucial for reliable and robust performance across applications, yet current optimizers often struggle in the zero-shot setting and require intricate hyperparameter tuning to adapt to new tasks. To address this, we propose a Pretrained Optimization Model (POM) that leverages knowledge gained from optimizing diverse tasks and solves zero-shot optimization efficiently, either through direct application or after fine-tuning on few-shot samples. Evaluation on the BBOB benchmark and two robot control tasks shows that POM outperforms state-of-the-art black-box optimization methods, especially on high-dimensional tasks. Fine-tuning POM with a small number of samples and a small budget yields significant performance improvements. Moreover, POM generalizes robustly across task distributions, dimensions, population sizes, and optimization horizons. Code is available at https://github.com/ninja-wm/POM/.
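The workflow the abstract describes, applying a pretrained population-based optimizer to an unseen black-box task without hyperparameter tuning, follows the usual suggest/evaluate/observe loop. The sketch below illustrates that loop on a sphere function; the `PretrainedOptimizer` class, its `suggest`/`observe` interface, and the rank-based stand-in update are illustrative assumptions for this sketch, not the API of the linked repository, where the update rule would come from pretrained model weights.

```python
import numpy as np

# Hypothetical stand-in for a pretrained population-based optimizer such as
# POM. The class name, constructor arguments, and suggest/observe interface
# are assumptions made for this sketch; a real POM would load pretrained
# weights and apply its learned update rule instead of the resampling below.
class PretrainedOptimizer:
    def __init__(self, dim, pop_size, seed=0):
        self.pop_size = pop_size
        self.rng = np.random.default_rng(seed)
        # Initialize candidates uniformly in the BBOB search domain [-5, 5].
        self.population = self.rng.uniform(-5.0, 5.0, size=(pop_size, dim))

    def suggest(self):
        # Return the current candidate solutions for evaluation.
        return self.population

    def observe(self, fitness):
        # Stand-in update: keep the better half and perturb it. A pretrained
        # model would instead apply the update it learned on diverse tasks.
        order = np.argsort(fitness)
        elites = self.population[order[: self.pop_size // 2]]
        noise = self.rng.normal(scale=0.1, size=elites.shape)
        self.population = np.concatenate([elites, elites + noise], axis=0)


def sphere(x):
    # BBOB-style sphere function, evaluated row-wise on a population.
    return np.sum(x ** 2, axis=-1)


# Zero-shot use: no task-specific hyperparameter tuning before the loop.
opt = PretrainedOptimizer(dim=10, pop_size=20)
best = np.inf
for _ in range(100):
    candidates = opt.suggest()
    fitness = sphere(candidates)
    best = min(best, fitness.min())
    opt.observe(fitness)

print(f"best sphere value after 100 generations: {best:.4f}")
```

In a real run, the hand-written `observe` update would be replaced by the pretrained model's learned update rule, which is what lets the same loop run zero-shot on new tasks; few-shot fine-tuning would adjust that rule on a handful of evaluations from the target task before the loop starts.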