Simulations of estimators for extreme value distributions

On Sunday I blogged the new Stata program I wrote for applying extreme value theory. It includes a novel computation to reduce bias for the generalized extreme value distribution (GEV). To document the efficacy of that correction and the package as a whole, I set my computer to testing it on simulated data. Since Sunday the little Lenovo has run half a billion regressions.

I simulated three distributions: the generalized Pareto distribution (GPD), the standard GEV, and the GEV extended to the top five order statistics instead of just one. In all cases, N = 50 and μ = ln σ = 0. ξ varies across the simulations from –0.5 to +1.0 in increments of 0.1. For each distribution and value of ξ, I ran 5000 Monte Carlo simulations, applying four variants of the estimator:

1. Standard maximum likelihood (ML).
2. ML followed by the Cox-Snell analytical bias correction if ξ̂ ≥ –0.2, where ξ̂ is the ML estimate. (The correction diverges as ξ → –1/3.) I thus follow Giles, Feng, and Godwin, except that I still make the correction if ξ̂ ≥ 1.
3. ML followed by a parametric bootstrap bias correction with 100 replications (sketched in the code below).
4. The same, but with 1000 replications.

The graphs below show, for each distribution, the average bias, standard deviation, and root-mean-square error of the estimators for all parameters. Click them to see full-sized versions. In each graph the ξ-labeled horizontal axis is for the true value of that parameter, while the μ-, ln σ-, and ξ-labeled vertical axes are for the various measures of estimator accuracy and stability (bias, SD, RMSE). I reset the pseudorandom number generator to the same state for each value of ξ, creating a kinship between the 5000 simulated...
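To make the design concrete, here is a minimal sketch of the simulation loop — in Python with SciPy rather than in the Stata of the actual package, and with a made-up seed. Note that SciPy's genextreme flips the sign of the shape parameter relative to the ξ convention used here, while genpareto does not:

```python
import numpy as np
from scipy.stats import genextreme, genpareto

N, REPS = 50, 5000
XIS = np.round(np.arange(-0.5, 1.05, 0.1), 1)  # true xi: -0.5 to +1.0 by 0.1
MU, SIGMA = 0.0, 1.0                           # mu = ln(sigma) = 0
SEED = 12345                                   # placeholder seed

for xi in XIS:
    # Reset the generator to the same state for each xi, so the draws
    # underlying the 5000 samples are shared across xi values.
    rng = np.random.default_rng(SEED)
    for rep in range(REPS):
        # SciPy's GEV shape is c = -xi; its GPD shape is c = +xi.
        y_gev = genextreme.rvs(-xi, loc=MU, scale=SIGMA, size=N, random_state=rng)
        y_gpd = genpareto.rvs(xi, loc=MU, scale=SIGMA, size=N, random_state=rng)
        # The five-order-statistic GEV variant would instead keep the five
        # largest values of each simulated block; omitted here.
        # ...fit each estimator variant to y_gev and y_gpd and store the results.
```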
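The parametric bootstrap correction in variants 3 and 4 is the standard recipe: estimate by ML, simulate B samples from the fitted distribution, re-estimate on each, and subtract the apparent bias. A sketch under the same SciPy conventions, with my own function names:

```python
import numpy as np
from scipy.stats import genextreme

def fit_gev(x):
    """ML fit; returns (xi, mu, ln sigma), flipping SciPy's shape sign."""
    c, loc, scale = genextreme.fit(x)
    return np.array([-c, loc, np.log(scale)])

def bootstrap_bias_correct(x, B=100, seed=0):
    """Bias-corrected estimate via parametric bootstrap."""
    theta_hat = fit_gev(x)
    xi, mu, lnsig = theta_hat
    rng = np.random.default_rng(seed)
    refits = np.empty((B, 3))
    for b in range(B):
        xb = genextreme.rvs(-xi, loc=mu, scale=np.exp(lnsig),
                            size=len(x), random_state=rng)
        refits[b] = fit_gev(xb)
    # bias ~ mean(refits) - theta_hat, so the corrected estimate is
    # theta_hat - bias = 2*theta_hat - mean(refits)
    return 2 * theta_hat - refits.mean(axis=0)
```

Variant 4 is the same call with B=1000, at ten times the cost in ML refits.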
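The three accuracy measures in the graphs then follow directly from the stacked estimates; RMSE ties the other two together, since RMSE² = bias² + SD²:

```python
import numpy as np

def summarize(estimates, truth):
    """estimates: (reps, n_params) array; truth: length-n_params vector."""
    bias = estimates.mean(axis=0) - truth
    sd = estimates.std(axis=0)
    rmse = np.sqrt(((estimates - truth) ** 2).mean(axis=0))  # = sqrt(bias^2 + sd^2)
    return bias, sd, rmse
```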