
Deep Active Learning for Regression Using $ε$-weighted Hybrid Query Strategy (2206.13298v3)

Published 24 Jun 2022 in cs.LG and cs.AI

Abstract: Designing an inexpensive approximate surrogate model that captures the salient features of an expensive high-fidelity behavior is a prevalent approach in design optimization. In recent times, Deep Learning (DL) models have been used as promising surrogate computational models for engineering problems. However, the main challenge in creating a DL-based surrogate is the need to simulate/label a large number of design points, which is time-consuming for computationally costly and/or high-dimensional engineering problems. In the present work, we propose a novel sampling technique that combines the active learning (AL) method with DL. We call this method the $\epsilon$-weighted hybrid query strategy ($\epsilon$-HQS); it focuses on evaluating the surrogate at each learning iteration and provides an estimate of the surrogate's failure probability over the design space. By reusing already collected training and test data, the learned failure probability guides the next iteration's sampling toward regions with a high probability of failure. In our empirical evaluation, the surrogate achieved better accuracy than surrogates trained with other sample-selection methods. We evaluated this method in two different engineering design domains: finite-element-based static stress analysis of a submarine pressure vessel (a computationally costly process) and submarine propeller design (a high-dimensional problem). https://github.com/vardhah/epsilon_weighted_Hybrid_Query_Strategy
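
To make the query step described in the abstract concrete, here is a minimal, illustrative sketch of an $\epsilon$-weighted hybrid selection: a failure classifier is fit on already-labeled points (marking where the surrogate's error exceeds a tolerance), and the next batch mixes the highest-failure-probability candidates with random exploration. This is not the authors' implementation; names such as `surrogate`, `failure_classifier`, `tolerance`, and the sklearn-style `fit`/`predict`/`predict_proba` interfaces are assumptions for the sketch only (see the linked repository for the actual code).

```python
import numpy as np

def fit_failure_classifier(classifier, X_seen, y_true, surrogate, tolerance):
    """Relabel already-evaluated points as surrogate failures/successes and refit.

    A point is a "failure" if the surrogate's prediction error exceeds `tolerance`.
    """
    errors = np.abs(surrogate.predict(X_seen) - y_true)
    failed = (errors > tolerance).astype(int)  # 1 = surrogate failed here
    classifier.fit(X_seen, failed)
    return classifier

def epsilon_hqs_query(candidates, failure_classifier, batch_size, epsilon, rng=None):
    """Select indices of the next batch of design points to label.

    A fraction `epsilon` of the batch is taken from the candidates the failure
    classifier deems most likely to be mispredicted (exploitation); the rest is
    drawn uniformly at random from the remaining candidates (exploration).
    """
    rng = np.random.default_rng(rng)
    n_exploit = int(round(epsilon * batch_size))
    n_explore = batch_size - n_exploit

    # Estimated probability that the surrogate fails at each candidate point.
    p_fail = failure_classifier.predict_proba(candidates)[:, 1]

    exploit_idx = np.argsort(-p_fail)[:n_exploit]
    remaining = np.setdiff1d(np.arange(len(candidates)), exploit_idx)
    explore_idx = rng.choice(remaining, size=n_explore, replace=False)
    return np.concatenate([exploit_idx, explore_idx])
```

In this reading, `epsilon` controls the exploit/explore split: larger values steer more of each batch toward regions where the learned failure probability is high, while the random remainder keeps covering the rest of the design space.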

Authors (2)
  1. Harsh Vardhan (21 papers)
  2. Janos Sztipanovits (14 papers)
Citations (1)
