Balancing the exploration-exploitation trade-off in active learning for surrogate model-based reliability analysis via multi-objective optimization (2508.18170v1)

Published 25 Aug 2025 in cs.CE

Abstract: Reliability assessment of engineering systems is often hindered by the need to evaluate limit-state functions through computationally expensive simulations, rendering standard sampling impractical. An effective solution is to approximate the limit-state function with a surrogate model iteratively refined through active learning, thereby reducing the number of expensive simulations. At each iteration, an acquisition strategy selects the next sample by balancing two competing goals: exploration, to reduce global predictive uncertainty, and exploitation, to improve accuracy near the failure boundary. Classical strategies, such as the U-function and the Expected Feasibility Function (EFF), implicitly condense exploration and exploitation into a scalar score derived from the surrogate predictive mean and variance, concealing the trade-off and biasing sampling. We introduce a multi-objective optimization (MOO) formulation for sample acquisition in reliability analysis, where exploration and exploitation are explicit, competing objectives. Within our framework, U and EFF correspond to specific Pareto-optimal solutions, providing a unifying perspective that connects classical and Pareto-based approaches. Solving the MOO problem discards dominated candidates, yielding a compact Pareto set, with samples representing a quantifiable exploration-exploitation trade-off. To select samples from the Pareto set, we adopt the knee point and the compromise solution, and further propose a strategy that adjusts the trade-off according to reliability estimates. Across benchmark limit-state functions, we assess the sample efficiency and active learning performance of all strategies. Results show that U and EFF exhibit case-dependent performance, knee and compromise are generally effective, and the adaptive strategy is robust, consistently reaching strict targets and maintaining relative errors below 0.1%.
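
The acquisition idea described in the abstract can be illustrated compactly. The sketch below is not the paper's implementation: it assumes, purely for illustration, that the exploitation objective is the closeness of the surrogate predictive mean to the limit state, |μ(x)|, and the exploration objective is the negated predictive standard deviation, −σ(x); it then filters candidate points to the non-dominated (Pareto) set and selects the knee point of the bi-objective front. All function names and objective definitions here are hypothetical stand-ins, since the paper's exact formulation is not given in the abstract.

```python
import numpy as np


def pareto_front(F):
    """Indices of the non-dominated rows of F (all objectives minimized)."""
    n = F.shape[0]
    nondominated = np.ones(n, dtype=bool)
    for i in range(n):
        if not nondominated[i]:
            continue
        # Candidate j dominates i if it is no worse in every objective
        # and strictly better in at least one.
        dominates_i = np.all(F <= F[i], axis=1) & np.any(F < F[i], axis=1)
        if dominates_i.any():
            nondominated[i] = False
    return np.flatnonzero(nondominated)


def knee_point(F):
    """Knee of a bi-objective front: the point farthest from the line joining
    the two extreme solutions, after min-max normalization of each objective."""
    G = (F - F.min(axis=0)) / (np.ptp(F, axis=0) + 1e-12)
    a = G[np.argmin(G[:, 0])]  # best point for objective 1
    b = G[np.argmin(G[:, 1])]  # best point for objective 2
    d = b - a
    dist = np.abs(d[1] * (G[:, 0] - a[0]) - d[0] * (G[:, 1] - a[1]))
    return int(np.argmax(dist / (np.linalg.norm(d) + 1e-12)))


def select_next_sample(mu, sigma):
    """Pick the next point to evaluate from surrogate predictions (mu, sigma).

    Assumed objectives (both minimized, illustrative only):
      f1 = |mu|    exploitation: predicted mean close to the limit state g(x) = 0
      f2 = -sigma  exploration: prefer large predictive uncertainty
    """
    F = np.column_stack([np.abs(mu), -sigma])
    front = pareto_front(F)
    return front[knee_point(F[front])]


# Toy usage: surrogate predictions at 1000 candidate points.
rng = np.random.default_rng(0)
mu = rng.normal(size=1000)
sigma = rng.uniform(0.05, 1.0, size=1000)
print("next sample index:", select_next_sample(mu, sigma))
```

A compromise-solution variant would instead pick the front member closest to the ideal point of the normalized objectives, and the adaptive strategy mentioned in the abstract would shift the selection along the front according to the current reliability estimate.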
