Ferrari, D. "Parametric density estimation by minimizing nonextensive entropy." Working paper, RECent Working Paper Series, Dipartimento di Economia Marco Biagi – Università di Modena e Reggio Emilia, 2008.
Parametric density estimation by minimizing nonextensive entropy
Ferrari, D.
2008
Abstract
In this paper, we consider parametric density estimation based on minimizing an empirical version of the Havrda-Charvát-Tsallis ([15], [25]) nonextensive entropy. The resulting estimator, called the Maximum Lq-Likelihood estimator (MLqE), is indexed by a single distortion parameter q, which controls the trade-off between bias and variance. The method has two notable special cases. As q tends to 1, the MLqE reduces to the Maximum Likelihood Estimator (MLE). When q = 1/2, the MLqE is a minimum Hellinger distance type of estimator, with the advantage of avoiding nonparametric techniques and the difficulties of bandwidth selection. The MLqE is studied using asymptotic analysis, simulations and real-world data, showing that it reconciles two apparently conflicting needs, efficiency and robustness, conditional on a proper choice of q. When the sample size is small or moderate, the MLqE trades bias for variance, resulting in a reduced mean squared error compared to the MLE. At the same time, the MLqE exhibits strong robustness, at the expense of a slight loss of efficiency, in the presence of observations discordant with the assumed model. To compute the MLq estimates, a fast and easy-to-implement algorithm based on a reweighting strategy is also described.
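As a concrete illustration of the reweighting strategy mentioned in the abstract, the minimal sketch below estimates the parameters of a normal model. It assumes the Lq-likelihood is built from the q-logarithm L_q(u) = (u^(1-q) - 1)/(1-q), whose maximization leads to weighted likelihood equations with weights w_i = f(x_i; θ)^(1-q); the function name `mlq_normal`, the starting values, and the stopping rule are illustrative choices, not the paper's exact algorithm.

```python
# Minimal sketch of an MLqE reweighting iteration for a normal model.
# Assumptions: q-logarithm Lq(u) = (u^(1-q) - 1)/(1-q); weights f(x_i)^(1-q).
import numpy as np
from scipy.stats import norm

def mlq_normal(x, q=0.9, tol=1e-8, max_iter=200):
    """Maximum Lq-likelihood estimate of (mean, std) for a normal model.

    Iterates between computing weights w_i = f(x_i; mu, sigma)^(1-q)
    and solving the weighted likelihood equations; q -> 1 recovers the MLE.
    """
    mu, sigma = np.mean(x), np.std(x)                # MLE as starting value
    for _ in range(max_iter):
        w = norm.pdf(x, mu, sigma) ** (1.0 - q)      # reweighting step
        mu_new = np.sum(w * x) / np.sum(w)           # weighted mean
        sigma_new = np.sqrt(np.sum(w * (x - mu_new) ** 2) / np.sum(w))
        converged = abs(mu_new - mu) + abs(sigma_new - sigma) < tol
        mu, sigma = mu_new, sigma_new
        if converged:
            break
    return mu, sigma

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # 5% of the sample is drawn from a discordant component
    x = np.concatenate([rng.normal(0.0, 1.0, 95), rng.normal(8.0, 1.0, 5)])
    print("MLE :", np.mean(x), np.std(x))
    print("MLqE:", mlq_normal(x, q=0.9))
```

For q < 1, observations falling in low-density regions of the fitted model receive weights f(x_i; θ)^(1-q) close to zero, which is the mechanism behind the robustness described in the abstract; q → 1 gives unit weights and recovers the ordinary MLE.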
File | Type | Access | Size | Format
---|---|---|---|---
RECent-wp16.pdf | Publisher's version | Open access | 1.5 MB | Adobe PDF