A theoretical framework and some promising findings of grey wolf optimizer, part II: global convergence analysis (2203.07636v1)
Abstract: This paper proposes a theoretical framework of the grey wolf optimizer (GWO) based on several theoretical findings, involving sampling distribution, order-1 and order-2 stability, and global convergence analysis. In part II of the paper, the global convergence analysis is carried out under the well-known stagnation assumption for simplification purposes. Firstly, the global convergence property of the GWO under the stagnation assumption is abstracted and modelled as two propositions, corresponding to global searching ability analysis and probability-1 global convergence analysis. Then, the global searching ability analysis is carried out. Next, based on a characteristic of the central moments of the new solution of the GWO under the stagnation assumption, the probability-1 global convergence property of the GWO under the stagnation assumption is proved. Finally, all conclusions are verified by numerical simulations, and it is further discussed why the global convergence property still holds in the original GWO without the stagnation assumption.
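For context, the analysis concerns the standard GWO position update, in which each wolf moves toward the three leaders (alpha, beta, delta) and the new solution is their average. Under the stagnation assumption, the leaders stay fixed while a single wolf's position is resampled. The following Python sketch illustrates that update (the function name and interface are illustrative, not from the paper):

```python
import numpy as np

def gwo_step(x, alpha, beta, delta, a, rng):
    """One standard GWO position update for a single wolf.

    Under the stagnation assumption, the leaders alpha, beta, delta
    are held fixed while the position x is resampled each iteration.
    The control parameter `a` is normally decreased from 2 to 0 over
    the run; here it is passed in directly.
    """
    candidates = []
    for leader in (alpha, beta, delta):
        r1 = rng.random(x.shape)
        r2 = rng.random(x.shape)
        A = 2.0 * a * r1 - a          # coefficient A, uniform in [-a, a]
        C = 2.0 * r2                  # coefficient C, uniform in [0, 2]
        D = np.abs(C * leader - x)    # distance to the perturbed leader
        candidates.append(leader - A * D)
    # New solution: average of the three leader-guided candidates.
    return np.mean(candidates, axis=0)
```

Note that when `a = 0` the stochastic term vanishes and the update collapses to the centroid of the three leaders, which is the degenerate end-of-run behaviour the stability analysis in part I characterises.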