Bias Reduction in Sample-Based Optimization (2103.07553v1)

Published 12 Mar 2021 in math.ST, math.OC, and stat.TH

Abstract: We consider stochastic optimization problems that use observed data to estimate essential characteristics of the random quantities involved. Sample average approximation (SAA), or empirical (plug-in) estimation, is a very popular way to use data in optimization. It is well known that sample average optimization suffers from downward bias. We propose to use smooth estimators rather than empirical ones in optimization problems. We establish consistency results for the optimal value and the set of optimal solutions of the new problem formulation. The performance of the proposed approach is compared with that of SAA both theoretically and numerically. We analyze the bias of the new problems and identify sufficient conditions that ensure less biased estimation of the optimal value of the true problem, while the error of the new estimator remains controlled. We show that those conditions are satisfied for many popular statistical problems, such as regression models, classification problems, and optimization problems with Average (Conditional) Value-at-Risk. We observe that smoothing the least-squares objective of a regression problem with a normal kernel leads to ridge regression. Our numerical experience shows that the new estimators frequently also exhibit smaller variance and smaller mean-square error than the SAA estimators.
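Two of the abstract's claims can be made concrete with standard arguments that are not specific to this paper's proofs. First, the downward bias of the SAA optimal value follows from exchanging minimization and expectation; here is a minimal sketch in generic notation (f, xi, X, v*, and v_N are illustrative symbols, not taken from the paper's text):

```latex
% Downward bias of the SAA optimal value \hat v_N relative to the
% true optimal value v^* (standard argument, generic notation).
\[
  \hat v_N = \min_{x \in X} \frac{1}{N}\sum_{i=1}^{N} f(x,\xi_i),
  \qquad
  v^* = \min_{x \in X} \mathbb{E}\,[f(x,\xi)].
\]
% For any fixed \bar x, the minimum is at most the value at \bar x:
\[
  \hat v_N \le \frac{1}{N}\sum_{i=1}^{N} f(\bar x,\xi_i)
  \quad\Longrightarrow\quad
  \mathbb{E}\,[\hat v_N] \le \mathbb{E}\,[f(\bar x,\xi)],
\]
% and minimizing the right-hand side over \bar x yields
% E[\hat v_N] <= v^*, i.e., the SAA value is biased downward.
```

Second, the smoothing-to-ridge connection can be checked numerically. The Python sketch below is an illustration under an assumed setup, not the paper's code: it smooths only the covariates with N(0, sigma^2 I) noise, in which case the expected squared loss per sample picks up the penalty sigma^2 * ||beta||^2, i.e., ridge regression with lambda = n * sigma^2 after summing over n samples. All variable names are hypothetical.

```python
import numpy as np

# Sketch (assumed setup, not the paper's code): adding N(0, sigma^2 I)
# noise to the covariates of a least-squares problem and averaging the
# loss over the noise gives
#   E_Z[(y - (x + sigma*Z)^T beta)^2] = (y - x^T beta)^2 + sigma^2 ||beta||^2,
# which is ridge regression with lambda = n * sigma^2 once the per-sample
# penalty is summed over the n samples. We verify this by Monte Carlo.

rng = np.random.default_rng(0)
n, d, sigma = 200, 5, 0.3
X = rng.normal(size=(n, d))
beta_true = rng.normal(size=d)
y = X @ beta_true + 0.1 * rng.normal(size=n)

# Closed-form ridge solution with lambda = n * sigma^2.
lam = n * sigma**2
beta_ridge = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

# Monte Carlo smoothed least squares: replicate each sample m times with
# fresh covariate noise, then solve the stacked least-squares problem,
# which minimizes the empirical smoothed objective.
m = 2000
Xs = np.concatenate([X + sigma * rng.normal(size=(n, d)) for _ in range(m)])
ys = np.tile(y, m)
beta_smooth, *_ = np.linalg.lstsq(Xs, ys, rcond=None)

print("ridge   :", np.round(beta_ridge, 3))
print("smoothed:", np.round(beta_smooth, 3))  # close to ridge, up to MC error
```

The two printed vectors should agree up to Monte Carlo error, which shrinks as m grows; this is the classical noise-injection-equals-ridge identity that the abstract attributes to normal-kernel smoothing.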
