An inverse Sanov theorem for curved exponential families
We prove a Large Deviation Principle (LDP) for posterior distributions arising from curved exponential families in a parametric setting, allowing for misspecification of the model. Then, motivated by the so-called inverse Sanov theorem, obtained in a nonparametric setting in two papers
by Ganesh and O'Connell at the beginning of this century,
we study the relationship between the rate function for the LDP above and the rate function governing the corresponding maximum likelihood estimators.
In a parametric setting, even without misspecification, it is not true in general that the rate functions for posterior distributions and for maximum likelihood estimators are Kullback-Leibler divergences with exchanged arguments.
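For context, the "exchanged arguments" phenomenon alluded to here can be stated schematically as follows (the notation below is ours, added for illustration, and is not part of the abstract): Sanov's theorem gives an LDP for empirical measures with a Kullback-Leibler rate, while the inverse Sanov theorem of Ganesh and O'Connell gives an LDP for posteriors with the same divergence evaluated in the opposite order.

```latex
% Schematic comparison (illustrative notation): for i.i.d. samples from a true law \mu_0,
% Sanov's theorem yields the empirical-measure rate, and the inverse Sanov theorem
% (Ganesh--O'Connell) yields the posterior rate, with the KL arguments exchanged.
\[
  I_{\mathrm{emp}}(\nu) = K(\nu \mid \mu_0),
  \qquad
  I_{\mathrm{post}}(\nu) = K(\mu_0 \mid \nu),
  \qquad
  \text{where } K(\alpha \mid \beta) = \int \log \frac{d\alpha}{d\beta} \, d\alpha .
\]
```

In the parametric, possibly misspecified setting of the paper, this exact exchange can fail, which is the point of the preceding sentence.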
Finally, the results of the paper have some further interest in the case of exponential families admitting a dual family, a topic recently addressed by Letac. Joint work with Claudio Macci.