Incremental Recursive Ranking Grouping for Large Scale Global Optimization (2206.04168v2)

Published 8 Jun 2022 in cs.NE

Abstract: Real-world optimization problems may have different underlying structures. In black-box optimization, the dependencies between decision variables remain unknown, although some techniques can discover such interactions accurately. In Large Scale Global Optimization (LSGO), problems are high-dimensional, and decomposing them into subproblems that are optimized separately has proven effective. The effectiveness of such approaches may depend strongly on the accuracy of the problem decomposition. Many state-of-the-art decomposition strategies are derived from Differential Grouping (DG). However, if a given problem consists of non-additively separable subproblems, DG-based strategies may discover many non-existing interactions. On the other hand, the monotonicity-checking strategies proposed so far do not report non-existing interactions for any separable subproblems but may miss many of the existing ones. Therefore, we propose Incremental Recursive Ranking Grouping (IRRG), which suffers from neither of these flaws. IRRG consumes more fitness function evaluations than recent DG-based proposals, e.g., Recursive DG 3 (RDG3). Nevertheless, the effectiveness of the considered Cooperative Co-evolution frameworks after embedding IRRG or RDG3 was similar for problems with additively separable subproblems, which suit RDG3. After replacing additive separability with non-additive separability, embedding IRRG led to results of significantly higher quality.
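
The trade-off the abstract describes can be illustrated with a short sketch. The Python snippet below (a minimal illustration, not the paper's IRRG algorithm; the function names, perturbation size delta, and threshold eps are hypothetical choices) contrasts a DG-style additivity check, which flags an interaction when the effect of perturbing one variable changes after another variable moves, with a monotonicity-style check, which flags an interaction only when the sign (i.e., the ranking of solutions) of that effect flips.

import numpy as np

def dg_interacts(f, x, i, j, delta=1.0, eps=1e-6):
    # DG-style check: x_i and x_j interact if the fitness change caused by
    # perturbing x_i differs before and after perturbing x_j. Exact only for
    # additive separability; non-additively separable subproblems can
    # trigger false interactions.
    xi = x.copy(); xi[i] += delta
    xj = x.copy(); xj[j] += delta
    xij = x.copy(); xij[i] += delta; xij[j] += delta
    d_before = f(xi) - f(x)      # effect of moving x_i with x_j fixed
    d_after = f(xij) - f(xj)     # effect of moving x_i after x_j moved
    return abs(d_before - d_after) > eps

def mono_interacts(f, x, i, j, delta=1.0):
    # Monotonicity-style check: x_i and x_j interact only if moving x_j
    # flips the sign of the change caused by perturbing x_i, i.e., the
    # ranking of the two solutions reverses. Never reports false
    # interactions for separable subproblems, but may miss real ones.
    xi = x.copy(); xi[i] += delta
    xj = x.copy(); xj[j] += delta
    xij = x.copy(); xij[i] += delta; xij[j] += delta
    return np.sign(f(xi) - f(x)) != np.sign(f(xij) - f(xj))

# Toy example: x0 and x1 interact; x2 is additively separable.
f = lambda x: x[0] * x[1] + x[2] ** 2
x = np.array([1.0, 2.0, 3.0])
print(dg_interacts(f, x, 0, 1), dg_interacts(f, x, 0, 2))      # True False
print(mono_interacts(f, x, 0, 1), mono_interacts(f, x, 0, 2))  # False False

On this toy function, the DG-style test finds the x0-x1 interaction while the monotonicity-style test misses it (the ranking never flips at these sample points), mirroring the trade-off above. Conversely, on a non-additively separable function such as f(x) = (x[0]**2 + 1) * (x[2]**2 + 1), the DG-style test would report a non-existing x0-x2 interaction, while the monotonicity-style test would not.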

Authors (4)
Citations (8)
