I had invited Larry to give an update, and I’m delighted that he has! The discussion relates to the last post (by Spanos), which follows upon my deconstruction of Wasserman*. So, for your Saturday night reading pleasure, join me** in reviewing this and the past two blogs and the links within.
“Wasserman on Wasserman: Update! December 28, 2013”
My opinions have shifted a bit.
My reference to Franken’s joke suggested that the usual philosophical debates about the foundations of statistics were unimportant, much like the debate about media bias. I was wrong on both counts.
First, I now think Franken was wrong. CNN and network news have a strong liberal bias, especially on economic issues. FOX has an obvious right-wing, anti-atheist bias. (At least FOX has some libertarians on the payroll.) And this does matter, because people believe what they see on TV and what they read in the NY Times. Paul Krugman’s socialist bullshit parading as economics has brainwashed millions of Americans. So media bias is much more than who makes better hummus.
Similarly, the Bayes-Frequentist debate still matters, and people, including many statisticians, are still confused about the distinction. I thought the basic Bayes-Frequentist debate was behind us. A year and a half of blogging (as well as reading other blogs) convinced me I was wrong here too.
My emphasis on high-dimensional models is germane, however. In our world of high-dimensional, complex models I can’t see how anyone can interpret the output of a Bayesian analysis in any meaningful way.
I wish people were clearer about what Bayes is/is not and what frequentist inference is/is not. Bayes is the analysis of subjective beliefs but provides no frequency guarantees. Frequentist inference is about making procedures that have frequency guarantees but makes no pretense of representing anyone’s beliefs. In the high dimensional world, you have to choose: objective frequency guarantees or subjective beliefs. Choose whichever you prefer, but you can’t have both. I don’t care which one people pick; I just wish they would be clear about what they are giving up when they make their choice.
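The trade-off Wasserman describes can be seen even in one dimension. Below is a small illustrative simulation (my sketch, not from the post): repeated draws from N(mu_true, 1), where a standard 95% confidence interval holds its advertised frequency of coverage at every value of mu_true, while a 95% credible interval built from a prior that happens to sit far from the truth does not. The particular prior and parameter values are assumptions chosen only to make the gap visible.

```python
# Illustrative sketch: frequentist coverage guarantee vs. a Bayesian
# credible interval under a misplaced prior. All settings are assumptions.
import numpy as np

rng = np.random.default_rng(0)
mu_true, sigma, n, reps = 3.0, 1.0, 5, 20000
z = 1.96  # two-sided 95% normal quantile

# Conjugate normal prior N(mu0, tau^2), deliberately centered away from mu_true
mu0, tau = 0.0, 1.0
post_prec = 1.0 / tau**2 + n / sigma**2     # posterior precision
post_sd = post_prec ** -0.5                  # posterior standard deviation

freq_hits = bayes_hits = 0
for _ in range(reps):
    x = rng.normal(mu_true, sigma, n)
    xbar = x.mean()
    # Frequentist 95% CI: xbar +/- z * sigma / sqrt(n)
    freq_hits += abs(xbar - mu_true) <= z * sigma / np.sqrt(n)
    # Bayesian 95% credible interval: posterior mean +/- z * posterior sd
    post_mean = (mu0 / tau**2 + n * xbar / sigma**2) / post_prec
    bayes_hits += abs(post_mean - mu_true) <= z * post_sd

print("frequentist coverage:", freq_hits / reps)  # close to 0.95
print("bayesian coverage:  ", bayes_hits / reps)  # noticeably below 0.95
```

The credible interval is a perfectly coherent summary of the posterior beliefs, and the confidence interval represents no one's beliefs; the simulation just makes concrete that each delivers only its own kind of guarantee.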
In your blog, Deborah, you mentioned the papers by Houman Owhadi, Clint Scovel and Tim Sullivan [Ed: See this post.], and then there is the paper by Gordon Belot (“Failure of Calibration is Typical”).
These challenges to Bayesian inference remain unanswered in my opinion. In fact, I think Freedman’s Theorem (1965, Annals, p. 454) has still not been adequately answered.
Of course, one can embrace objective Bayesian inference. If this means “Bayesian procedures with good frequentist properties” then I am all for it. But this is just frequentist inference in Bayesian clothing. If it means something else, I’d like to know what.
*I had intended to post a Wasserman update (were I lucky enough to get one) after running the Gelman and Hennig deconstructions from last year, but since Wasserman has surprised me, I’m reversing the order. Two additional related posts are below. I invite the other discussants to share their current reflections and updates, whenever they wish. Normal Deviate is back!
Wasserman on Spanos: http://errorstatistics.com/2012/08/11/u-phil-wasserman-replies-to-spanos-and-hennig/
Wasserman (initial) response to Mayo’s deconstruction: http://errorstatistics.com/2012/08/13/u-phil-concluding-the-deconstruction-wasserman-mayo/
**True, it’s all work here in exile, aside from an occasional visit to the Elbar Room!