Simulations of estimators for extreme value distributions

On Sunday I blogged the new Stata program I wrote for applying extreme value theory. It includes a novel computation to reduce bias for the generalized extreme value distribution (GEV). To document the efficacy of that correction and the package as a whole, I set my computer to testing it on simulated data. Since Sunday the little Lenovo has run half a billion regressions.

I simulated three distributions: the generalized Pareto distribution (GPD), the standard GEV, and the GEV extended to five order statistics instead of one. In all cases, N=50 and μ=lnσ=0. ξ varies across the simulations from –0.5 to +1.0 in increments of 0.1. For each distribution and value of ξ, I ran 5000 Monte Carlo simulations, applying four variants of the estimator (see the sketch after this list):

  • Standard maximum likelihood (ML).
  • ML followed by the Cox-Snell analytical bias correction if ξ̂ > –0.2, where ξ̂ is the ML estimate. (The correction diverges as ξ → –1/3.) I thus follow Giles, Feng, and Godwin, except that I still make the correction if ξ̂ > 1.
  • ML followed by a parametric bootstrap bias correction with 100 replications.
  • The same, but with 1000 replications.
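
Here, as promised, is a minimal sketch of one cell of the experiment. It is in Python rather than Stata, so it illustrates the method, not the package's code: it fits the standard GEV by ML and applies the parametric bootstrap bias correction. Note that scipy's genextreme takes shape c = –ξ, and the replication counts are scaled down from 5000 Monte Carlo runs and 100 or 1000 bootstrap replications so the sketch finishes quickly. The Cox-Snell step is omitted (as noted below, it is more an algorithm than a formula).

```python
import numpy as np
from scipy import stats

# Illustrative sketch of one grid cell (GEV, xi = 0.3), not the package's
# Stata code. scipy's genextreme uses shape c = -xi; replication counts
# are scaled down from the post's so this runs in reasonable time.

def fit_gev(x):
    # ML fit; map scipy's (c, loc, scale) to (mu, ln sigma, xi)
    c, loc, scale = stats.genextreme.fit(x)
    return np.array([loc, np.log(scale), -c])

def boot_bias_correct(theta, n, B, rng):
    # Parametric bootstrap: simulate from the fitted model, re-estimate,
    # and subtract the estimated bias: theta_bc = 2*theta - mean(theta*_b)
    mu, lnsig, xi = theta
    boot = np.empty((B, 3))
    for b in range(B):
        xb = stats.genextreme.rvs(-xi, loc=mu, scale=np.exp(lnsig),
                                  size=n, random_state=rng)
        boot[b] = fit_gev(xb)
    return 2 * theta - boot.mean(axis=0)

rng = np.random.default_rng(1)
n, reps, B = 50, 200, 50                  # post: reps = 5000, B = 100 or 1000
true = np.array([0.0, 0.0, 0.3])          # mu = ln(sigma) = 0, xi = 0.3

ml = np.empty((reps, 3))
bc = np.empty((reps, 3))
for r in range(reps):
    x = stats.genextreme.rvs(-true[2], loc=true[0], scale=np.exp(true[1]),
                             size=n, random_state=rng)
    ml[r] = fit_gev(x)
    bc[r] = boot_bias_correct(ml[r], n, B, rng)

for name, est in (("ML", ml), ("bootstrap-corrected", bc)):
    print(name, "bias:", (est.mean(axis=0) - true).round(3),
          "RMSE:", np.sqrt(((est - true) ** 2).mean(axis=0)).round(3))
```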

The graphs below show, for each distribution, the average bias, standard deviation, and root-mean-square error of the estimators for all parameters. Click them to see full-sized versions. In each graph the ξ-labeled horizontal axis is for the true value of that parameter, while the μ-, lnσ-, and ξ-labeled vertical axes are for the various measures of estimator accuracy and stability (bias, SD, RMSE). I reset the pseudorandom number generator to the same state for each value of ξ, creating a kinship between the 5000 simulated data sets used for each value. This removes a bit of simulation noise, making the curves below smoother and allowing more confident attribution of trends to the behavior of the estimator.
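
Concretely, this is the trick simulation folks call common random numbers: reuse the same underlying uniform draws at every grid point, so that differences between points on a curve come from the estimator, not from fresh sampling noise. A hypothetical sketch, feeding the shared uniforms through the GEV inverse CDF x = ((–ln u)^–ξ – 1)/ξ with μ = 0 and σ = 1:

```python
import numpy as np

n, reps, seed = 50, 5000, 1
for xi in np.arange(-0.5, 1.001, 0.1):
    if abs(xi) < 1e-12:
        continue                        # xi = 0 needs the Gumbel limit -ln(-ln u)
    rng = np.random.default_rng(seed)   # same state for every value of xi
    u = rng.uniform(size=(reps, n))     # identical uniforms at each grid point
    x = ((-np.log(u)) ** -xi - 1) / xi  # GEV inverse CDF with mu = 0, sigma = 1
    # ... estimate (mu, ln sigma, xi) on each row of x ...
```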

I have to say, despite the effort I put into deriving the Cox-Snell correction for the GEV (it's more an algorithm than a formula), bypassing the complex calculus with brute force does pretty darn well. Bootstrapping produces comparable or better bias reduction across the range of values for ξ while adding only slightly more variance. Going from 100 to 1000 replications adds little value.
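
For reference, the logic of that brute force, in the textbook parametric-bootstrap form (which is what the sketch above implements; I assume the package does essentially the same): the bias of the ML estimator at the fitted parameters stands in for its bias at the truth, so simulate B data sets from the fitted model, re-estimate θ̂*_b on each, and subtract the estimated bias:

```latex
\hat{b} = \frac{1}{B}\sum_{b=1}^{B}\hat\theta^{*}_{b} - \hat\theta,
\qquad
\tilde\theta = \hat\theta - \hat{b} = 2\hat\theta - \frac{1}{B}\sum_{b=1}^{B}\hat\theta^{*}_{b}
```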

Code here. Numerical results here, here, and here.

Simulations of the first three estimators (the fastest ones) with 50000 instead of 5000 runs for each value of ξ produce essentially the same results for those estimators.

Generalized Pareto distribution

[Figures: gpdsim5000Bias, gpdsim5000SD, gpdsim5000RMSE]

Generalized extreme value distribution

[Figures: gevsim5000Bias, gevsim5000SD, gevsim5000RMSE]

Generalized extreme value distribution/5 order statistics

[Figures: gev5sim5000Bias, gev5sim5000SD, gev5sim5000RMSE]
