Objective Priors in the Empirical Bayes Framework

30 Nov 2016  ·  Ilja Klebanov, Alexander Sikorski, Christof Schütte, Susanna Röblitz ·

When estimating a probability density within the empirical Bayes framework, the nonparametric maximum likelihood estimate (NPMLE) tends to overfit the data. This issue is often addressed by regularization: a penalty term is subtracted from the marginal log-likelihood before the maximization step, so that the estimate favors smooth densities. The majority of penalizations currently in use are rather arbitrary brute-force solutions that lack invariance under reparametrization. This contradicts the principle that, if the underlying model has several equivalent formulations, the methods of inductive inference should lead to consistent results. Motivated by this principle, and following an information-theoretic approach similar to the construction of reference priors, we suggest a penalty term that guarantees this kind of invariance. The resulting density estimate constitutes an extension of reference priors.
