LAMBDA: Covering the Solution Set of Black-Box Inequality by Search Space Quantization (2203.13708v1)

Published 25 Mar 2022 in cs.LG, cs.AI, cs.RO, and math.OC

Abstract: Black-box functions are broadly used to model complex problems that expose no explicit information beyond their inputs and outputs. Despite extensive study of black-box function optimization, in many practical situations the solution set satisfying an inequality on a black-box function plays a more significant role than a single optimum. Covering as much of this solution set as possible with a limited number of evaluations of the black-box objective function is defined in this paper as the Black-Box Coverage (BBC) problem. We formalize the problem in a sample-based search paradigm and construct a coverage criterion with Confusion Matrix Analysis. We then propose LAMBDA (Latent-Action Monte-Carlo Beam Search with Density Adaption) to solve BBC problems. LAMBDA quickly focuses on the solution set by recursively partitioning the search space into accepted and rejected sub-spaces. Compared with La-MCTS, LAMBDA introduces density information to overcome the sampling bias of optimization and to obtain more exploration. Benchmarking shows that LAMBDA achieves state-of-the-art performance among all baselines and is up to 33x faster than Random Search at reaching 95% coverage. Experiments also demonstrate that LAMBDA is promising for the verification of autonomous systems in virtual tests.
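
To make the BBC setting concrete, the sketch below sets up a toy black-box inequality and measures how well a Random Search baseline covers its solution set under a fixed evaluation budget. It is a minimal illustration only: the function `black_box`, the grid-based coverage proxy `grid_coverage`, and all parameters are assumptions for this example, not the paper's LAMBDA algorithm or its confusion-matrix coverage criterion.

```python
import numpy as np

# Toy black-box inequality: the goal is to cover {x : f(x) <= 0}.
# Function name and form are illustrative, not taken from the paper.
def black_box(x):
    return np.sum(x**2) - 1.0  # solution set = unit disk in 2D

def grid_coverage(accepted, lo=-2.0, hi=2.0, n_cells=20):
    """Crude coverage proxy: fraction of grid cells lying inside the
    solution set that contain at least one accepted sample. This is
    NOT the confusion-matrix criterion used in the paper."""
    cell = (hi - lo) / n_cells
    centers = lo + cell * (np.arange(n_cells) + 0.5)
    xs, ys = np.meshgrid(centers, centers)
    grid = np.stack([xs.ravel(), ys.ravel()], axis=1)
    in_set = np.array([black_box(p) <= 0.0 for p in grid])
    if not in_set.any() or not accepted:
        return 0.0
    hit = np.zeros(len(grid), dtype=bool)
    for a in accepted:
        idx = np.argmin(np.linalg.norm(grid - a, axis=1))
        hit[idx] = True
    return (hit & in_set).sum() / in_set.sum()

# Random-search baseline: sample uniformly and keep points that satisfy
# the inequality; LAMBDA aims to reach high coverage far faster than this.
rng = np.random.default_rng(0)
budget = 500
samples = rng.uniform(-2.0, 2.0, size=(budget, 2))
accepted = [x for x in samples if black_box(x) <= 0.0]
print(f"accepted {len(accepted)}/{budget}, coverage ~ {grid_coverage(accepted):.2f}")
```

Under this setup, LAMBDA's reported advantage corresponds to reaching a target coverage level (e.g. 95%) with far fewer calls to `black_box` than the uniform sampler above.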
