Overfitting Reduction in Convex Regression (2404.09528v2)

Published 15 Apr 2024 in stat.ME, econ.EM, and stat.AP

Abstract: Convex regression is a method for estimating a convex function from a data set. This method has played an important role in operations research, economics, machine learning, and many other areas. However, it has been empirically observed that convex regression produces inconsistent estimates of convex functions and extremely large subgradients near the boundary as the sample size increases. In this paper, we provide theoretical evidence of this overfitting behavior. To eliminate this behavior, we propose two new estimators that place a bound on the subgradients of the convex function. We further show that our proposed estimators reduce overfitting by proving that they converge to the underlying true convex function and that their subgradients converge to the gradient of the underlying function, both uniformly over the domain with probability one as the sample size increases to infinity. An application to Finnish electricity distribution firms confirms the superior predictive performance of the proposed methods over existing methods.
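
A minimal sketch of the idea described in the abstract: least-squares convex regression formulated as a quadratic program, with an added norm bound L on the fitted subgradients to curb the boundary overfitting. This is not the authors' implementation; the use of cvxpy, the function name, and the choice of L and synthetic data are assumptions for illustration only.

```python
import numpy as np
import cvxpy as cp


def bounded_convex_regression(X, y, L):
    """Convex regression with a bound ||xi_i|| <= L on each fitted subgradient."""
    n, d = X.shape
    theta = cp.Variable(n)        # fitted values theta_i approximating f(x_i)
    xi = cp.Variable((n, d))      # subgradients xi_i of the fit at each x_i

    constraints = []
    for i in range(n):
        for j in range(n):
            if i != j:
                # Convexity: the supporting hyperplane at x_i lies below theta_j.
                constraints.append(theta[j] >= theta[i] + xi[i] @ (X[j] - X[i]))
        # Subgradient bound intended to prevent extreme slopes near the boundary.
        constraints.append(cp.norm(xi[i], 2) <= L)

    problem = cp.Problem(cp.Minimize(cp.sum_squares(y - theta)), constraints)
    problem.solve()
    return theta.value, xi.value


# Example usage on synthetic data (hypothetical, for illustration).
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.uniform(-1, 1, size=(50, 2))
    y = np.sum(X**2, axis=1) + 0.1 * rng.standard_normal(50)
    theta_hat, xi_hat = bounded_convex_regression(X, y, L=5.0)
    print("max fitted subgradient norm:", np.linalg.norm(xi_hat, axis=1).max())
```

Without the norm constraint this reduces to ordinary least-squares convex regression, which is the setting in which the abstract reports inconsistent estimates and very large boundary subgradients.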
