Reducing MSE in estimation of heavy tails: a Bayesian approach (1606.05687v1)

Published 17 Jun 2016 in math.ST and stat.TH

Abstract: Bias reduction in tail estimation has received considerable interest in extreme value analysis. Estimation methods that minimize the bias while keeping the mean squared error (MSE) under control are especially useful when applying classical methods such as the Hill (1975) estimator. Caeiro et al. (2005) first proposed minimum variance reduced bias estimators of the Pareto tail index, in which the bias is reduced without increasing the variance with respect to the Hill estimator. This method is based on adequate external estimation of a pair of second-order parameters. Here we revisit this problem from a Bayesian point of view, starting from the extended Pareto distribution (EPD) approximation to excesses over a high threshold, as developed in Beirlant et al. (2009) using maximum likelihood (ML) estimation. Using asymptotic considerations, we derive an appropriate choice of priors leading to a Bayes estimator for which the MSE curve is a weighted average of the Hill and EPD-ML MSE curves for a large range of thresholds, under the same conditions as in Beirlant et al. (2009). A similar result is obtained for tail probability estimation. Simulations show surprisingly good MSE performance with respect to the existing estimators.
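For context, the classical Hill (1975) estimator referenced in the abstract averages log-excesses of the top order statistics over a threshold. The sketch below is only an illustration of that baseline estimator, not of the paper's Bayesian or EPD-ML procedures; the simulated Pareto sample and the choice k = 200 are illustrative assumptions.

```python
import numpy as np

def hill_estimator(x, k):
    """Hill (1975) estimator of the extreme value index gamma = 1/alpha.

    x : 1-D array of positive observations with a heavy right tail
    k : number of top order statistics used (1 <= k < len(x))
    """
    x = np.sort(np.asarray(x, dtype=float))   # ascending order statistics
    log_top = np.log(x[-k:])                  # k largest observations
    log_threshold = np.log(x[-k - 1])         # (k+1)-th largest as the threshold
    return np.mean(log_top - log_threshold)   # average log-excess

# Illustrative check: standard Pareto sample with tail index alpha = 2,
# so the true extreme value index is gamma = 1/alpha = 0.5.
rng = np.random.default_rng(0)
sample = rng.pareto(2.0, size=5000) + 1.0     # Pareto(alpha=2) on [1, inf)
print(hill_estimator(sample, k=200))          # should be close to 0.5
```

The bias issue the paper addresses arises because the estimate varies strongly with k: small k inflates the variance, while large k lets slowly decaying second-order terms bias the estimate, which is why reduced-bias and Bayesian alternatives over a range of thresholds are of interest.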
