The Extended Bregman Divergence and Parametric Estimation in Continuous Models
Article Type
Research Article
Publication Title
Sankhya B
Abstract
Under standard regularity conditions, the maximum likelihood estimator (MLE) is the most efficient estimator at the model. Modern practice, however, recognizes that the hypothesized model rarely holds exactly, and small departures from it are never entirely unexpected. Classical estimators such as the MLE are extremely sensitive to the presence of noise in the data. Within the class of robust estimators, which comprises parametric inference techniques designed to overcome the problems caused by model misspecification and noise, minimum distance estimators have become quite popular in recent times. In particular, density-based distances under the umbrella of the Bregman divergence have been shown to have several advantages. Here we consider an extension of the ordinary Bregman divergence and investigate the scope of parametric estimation under continuous models using this extended divergence proposal. Many of our illustrations are based on the GSB divergence, a particular member of the extended Bregman family, which appears to hold considerable promise in the robustness area. To establish the usefulness of the proposed minimum distance estimation procedure, we provide detailed theoretical investigations followed by substantial numerical verification.
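For reference (this recalls the classical construction only; the extended divergence and the GSB member are defined in the article itself), the ordinary density-based Bregman divergence between densities g and f, generated by a convex function B, is conventionally written as

D_B(g, f) = \int \left[ B\bigl(g(x)\bigr) - B\bigl(f(x)\bigr) - \bigl\{g(x) - f(x)\bigr\} B'\bigl(f(x)\bigr) \right] \, dx.

Taking B(y) = y^2 recovers the squared L_2 distance, B(y) = y \log y yields the Kullback-Leibler divergence, and B(y) = y^{1+\alpha}/\alpha generates the density power divergence, illustrating the breadth of the family on which the paper's extension builds.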
First Page
333
Last Page
365
DOI
10.1007/s13571-024-00333-z
Publication Date
11-1-2024
Recommended Citation
Basak, Sancharee and Basu, Ayanendranath, "The Extended Bregman Divergence and Parametric Estimation in Continuous Models" (2024). Journal Articles. 5140.
https://digitalcommons.isical.ac.in/journal-articles/5140