
Robust Statistics meets elicitability: When fair model validation breaks down (2405.09943v1)

Published 16 May 2024 in math.ST and stat.TH

Abstract: A crucial part of data analysis is the validation of the resulting estimators, in particular when several competing estimators need to be compared. Whether an estimator can be objectively validated is not a trivial property. If there exists a loss function such that the theoretical risk is minimized by the quantity of interest, this quantity is called elicitable, allowing estimators of it to be objectively validated and compared by evaluating such a loss function. Elicitability requires assumptions on the underlying distributions, often in the form of regularity conditions. Robust Statistics is a discipline that provides estimators in the presence of contaminated data. In this paper, we introduce the elicitability breakdown point and formally pin down why the problems that contaminated data cause for estimation spill over to validation, making elicitability fail. Furthermore, as the goal is usually to estimate the quantity of interest w.r.t. the non-contaminated distribution, even modified notions of elicitability may be doomed to fail. The performance of a theoretically sound trimming procedure that filters out instances stemming from non-ideal distributions is illustrated in several numerical experiments. Even in simple settings, however, elicitability often fails, indicating the need for validation procedures with a non-zero elicitability breakdown point.
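For context, the elicitability property referred to in the abstract can be written in standard notation (which may differ from the paper's own symbols): a functional T of a distribution F is elicitable if there is a scoring (loss) function S whose expected value under F is minimized exactly at T(F).

```latex
% Standard definition of elicitability (general notation; the paper may use
% different symbols). A functional T is elicitable if there exists a scoring
% function S such that T(F) minimizes the expected score under F:
\[
  T(F) \;=\; \operatorname*{arg\,min}_{x} \, \mathbb{E}_{Y \sim F}\big[\, S(x, Y) \,\big]
  \qquad \text{for all } F \text{ in the relevant class of distributions.}
\]
% Example: the mean is elicited by the squared error S(x, y) = (x - y)^2,
% and an alpha-quantile by the pinball (check) loss.
```

The abstract does not spell out the paper's trimming procedure. Purely as a hedged toy illustration of the general phenomenon, the following Python sketch contrasts validating a non-robust and a robust location estimator on a contaminated sample versus a crudely trimmed one; the distribution parameters and the quantile-based trimming rule are assumptions made for this example, not taken from the paper.

```python
import numpy as np

# Toy illustration (not the paper's procedure): score two location estimators
# with squared error, once on a contaminated sample and once after a crude
# quantile-based trim. The quantity of interest is the mean of the ideal
# (non-contaminated) distribution, which is 0 here.
rng = np.random.default_rng(0)

eps = 0.1          # contamination fraction (assumed)
n = 1000
ideal = rng.normal(0.0, 1.0, size=n)     # ideal distribution N(0, 1)
contam = rng.normal(10.0, 1.0, size=n)   # contaminating distribution N(10, 1)
mask = rng.random(n) < eps
sample = np.where(mask, contam, ideal)

mean_hat = sample.mean()        # non-robust estimator
median_hat = np.median(sample)  # robust estimator

def avg_score(x_hat, data):
    """Average squared-error score; its minimizer is the mean of `data`."""
    return np.mean((data - x_hat) ** 2)

# On the raw sample, the score is minimized near the contaminated mean,
# so validation favours the non-robust estimator.
print("raw sample:    ", avg_score(mean_hat, sample), avg_score(median_hat, sample))

# Crude trimming: drop observations outside the [eps, 1 - eps] sample quantiles
# before scoring, to filter out most contaminated instances.
lo, hi = np.quantile(sample, [eps, 1.0 - eps])
trimmed = sample[(sample >= lo) & (sample <= hi)]
print("trimmed sample:", avg_score(mean_hat, trimmed), avg_score(median_hat, trimmed))
```

On the raw contaminated sample, the squared-error score tends to favour the sample mean, which tracks the contaminated mean rather than the quantity of interest; after trimming, the score tends to favour the robust median, which stays closer to the ideal mean. This is only meant to mimic the kind of breakdown of validation, and the trimming-based remedy, that the abstract describes.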
