Ordered Risk Minimization: Learning More from Less Data (2303.09196v2)
Abstract: We consider the worst-case expectation over a permutation-invariant ambiguity set of discrete distributions as a proxy cost for data-driven expected risk minimization. For this framework, we coin the term ordered risk minimization to highlight how results from order statistics inspired the proxy cost. Specifically, we show how such costs serve as pointwise high-confidence upper bounds of the expected risk, where the confidence level can be determined tightly for any sample size. Conversely, we also illustrate how to calibrate the size of the ambiguity set so that the high-confidence upper bound attains a user-specified confidence. Notably, this calibration procedure supports $\phi$-divergence-based ambiguity sets. Numerical experiments then illustrate how the resulting scheme both generalizes better and is less sensitive to tuning parameters than the empirical risk minimization approach.
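To make the central object concrete, the sketch below computes a worst-case expected loss over a $\phi$-divergence (here KL) ball around the empirical distribution, which is one example of a permutation-invariant ambiguity set: permuting the per-sample losses does not change the value, so it depends on the losses only through their order statistics. This is an illustrative construction under stated assumptions, not the paper's exact formulation; the function name `ordered_risk` and the radius parameter `r` (playing the role of the calibrated ambiguity-set size) are hypothetical.

```python
# Illustrative sketch: worst-case expectation of per-sample losses over a
# KL-divergence ball around the empirical (uniform) distribution, used as a
# proxy cost in place of the plain empirical mean. Assumes cvxpy and numpy.
import cvxpy as cp
import numpy as np


def ordered_risk(losses: np.ndarray, r: float) -> float:
    """Worst-case expected loss over {q : KL(q || p_hat) <= r}."""
    n = losses.size
    p_hat = np.full(n, 1.0 / n)          # empirical weights (uniform)
    q = cp.Variable(n, nonneg=True)      # adversarial reweighting of samples
    constraints = [
        cp.sum(q) == 1,
        # KL(q || p_hat) <= r; cp.kl_div sums to the KL divergence here
        # because both q and p_hat sum to one.
        cp.sum(cp.kl_div(q, p_hat)) <= r,
    ]
    problem = cp.Problem(cp.Maximize(losses @ q), constraints)
    problem.solve()
    return problem.value


# Usage: the proxy cost upper-bounds the empirical mean and grows with r.
rng = np.random.default_rng(0)
ell = rng.exponential(size=50)           # stand-in per-sample losses
print(ell.mean(), ordered_risk(ell, r=0.05), ordered_risk(ell, r=0.2))
```

Minimizing such a proxy cost over model parameters, with `r` calibrated so that the proxy is a high-confidence upper bound on the expected risk, is the kind of scheme the abstract compares against empirical risk minimization.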