I had invited Larry to give an update, and I’m delighted that he has! The discussion relates to the last post (by Spanos), which follows upon my deconstruction of Wasserman*. So, for your Saturday night reading pleasure, join me** in reviewing this and the past two blogs and the links within.
“Wasserman on Wasserman: Update! December 28, 2013”
My opinions have shifted a bit.
My reference to Franken’s joke suggested that the usual philosophical debates about the foundations of statistics were unimportant, much like the debate about media bias. I was wrong on both counts.
First, I now think Franken was wrong. CNN and network news have a strong liberal bias, especially on economic issues. FOX has an obvious right-wing, anti-atheist bias. (At least FOX has some libertarians on the payroll.) And this does matter, because people believe what they see on TV and what they read in the NY Times. Paul Krugman’s socialist bullshit parading as economics has brainwashed millions of Americans. So media bias is much more than who makes better hummus.
Similarly, the Bayes-Frequentist debate still matters, and people, including many statisticians, are still confused about the distinction. I thought the basic Bayes-Frequentist debate was behind us, but a year and a half of blogging (as well as reading other blogs) convinced me I was wrong here too.
My emphasis on high-dimensional models is germane, however. In our world of high-dimensional, complex models I can’t see how anyone can interpret the output of a Bayesian analysis in any meaningful way.
I wish people were clearer about what Bayes is/is not and what frequentist inference is/is not. Bayes is the analysis of subjective beliefs but provides no frequency guarantees. Frequentist inference is about making procedures that have frequency guarantees but makes no pretense of representing anyone’s beliefs. In the high dimensional world, you have to choose: objective frequency guarantees or subjective beliefs. Choose whichever you prefer, but you can’t have both. I don’t care which one people pick; I just wish they would be clear about what they are giving up when they make their choice.
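[Ed: As a concrete, purely illustrative sketch of what a “frequency guarantee” buys you, and what it doesn’t, here is a small simulation I’ve added (not from Larry’s update): a 95% confidence interval for a normal mean covers the true value in roughly 95% of repeated samples whatever the true mean is, while a 95% credible interval computed under a deliberately misplaced N(5, 0.5²) prior carries no such guarantee. All the numbers (the prior, σ = 1, n = 10) are made up for the sketch.]

```python
# Illustrative sketch only: coverage of a frequentist 95% CI vs. a 95% credible
# interval under an assumed, deliberately misplaced prior. Not from the post.
import numpy as np

rng = np.random.default_rng(0)
true_mu, sigma, n, reps = 0.0, 1.0, 10, 20_000

# Assumed prior for the Bayesian interval (made up; far from true_mu on purpose).
prior_mu, prior_sd = 5.0, 0.5

freq_cover = bayes_cover = 0
for _ in range(reps):
    x = rng.normal(true_mu, sigma, n)
    xbar, se = x.mean(), sigma / np.sqrt(n)

    # Frequentist 95% CI: xbar +/- 1.96 * se (coverage ~0.95 by construction)
    freq_cover += abs(xbar - true_mu) <= 1.96 * se

    # Conjugate normal-normal posterior and its central 95% credible interval
    post_var = 1.0 / (1.0 / prior_sd**2 + n / sigma**2)
    post_mu = post_var * (prior_mu / prior_sd**2 + n * xbar / sigma**2)
    bayes_cover += abs(post_mu - true_mu) <= 1.96 * np.sqrt(post_var)

print(f"frequentist coverage:      {freq_cover / reps:.3f}")   # close to 0.95
print(f"credible-interval coverage: {bayes_cover / reps:.3f}")  # near 0 here
```

[Under a prior centered near the truth the two intervals would largely agree; the point of the sketch is only that whatever frequentist coverage a credible interval enjoys depends on the prior, which is what “no frequency guarantees” is gesturing at.]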
In your blog, Deborah, you mentioned the papers by Houman Owhadi, Clint Scovel and Tim Sullivan [Ed: See this post.], and then there is the paper by Gordon Belot (“Failure of Calibration is Typical”).
These challenges to Bayesian inference remain unanswered in my opinion. In fact, I think Freedman’s theorem (1965, Annals, p. 454) has still not been adequately answered.
Of course, one can embrace objective Bayesian inference. If this means “Bayesian procedures with good frequentist properties” then I am all for it. But this is just frequentist inference in Bayesian clothing. If it means something else, I’d like to know what.
*I had intended to post a Wasserman update, were I lucky enough to get one, after running the Gelman and Hennig deconstructions from last year, but since Wasserman has surprised me, I’m reversing the order. Two additional related posts are below. I invite the other discussants to share their current reflections and updates, whenever they wish. Normal Deviate is back!
Wasserman on Spanos: https://errorstatistics.com/2012/08/11/u-phil-wasserman-replies-to-spanos-and-hennig/
Wasserman (initial) response to Mayo’s deconstruction: https://errorstatistics.com/2012/08/13/u-phil-concluding-the-deconstruction-wasserman-mayo/
**True, it’s all work here in exile, aside from an occasional visit to the Elbar Room!
Normal Deviate: I’m so glad to have your frank update. It’s great, and I agree with all you say, although I’d extend the point to models of any dimension. My problems with so-called objective Bayesian inference are (a) the ones Kass and Wasserman (1996)* bring out for all “default” or conventional Bayesian priors, (b) the fact that I find them schizophrenic in their claimed rationales (we want to put in background information, but we want uninformative priors, at least until you come up with your subjective priors, and once you do, we’ll put them in), and (c) the fact that the frequentist guarantees aren’t the relevant ones (or the most relevant ones) for making inferences about a particular hypothesis or claim. Even when the numbers match, I don’t see that they supply the interpretation that’s wanted for scientific inference (i.e., in terms of how well tested the particular claim is by the method giving rise to the data). That’s different from screening.
They also encourage another fashionable hybrid: posterior probabilities of “false positives”, based on presumed relative frequencies of true nulls, coupled with recipe-like “up-down” significance testing.
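For readers who haven’t seen this hybrid spelled out, here is one reconstruction of the recipe (the symbols and numbers are mine, not anything in the exchange): with a presumed proportion π₀ of true nulls among the hypotheses screened, tests run at level α, and power 1 − β against the alternatives, the quantity reported as a “posterior probability of a false positive” among rejections is

$$
P(H_0 \mid \mathrm{reject}) \;=\; \frac{\pi_0\,\alpha}{\pi_0\,\alpha + (1-\pi_0)(1-\beta)},
$$

so with, say, π₀ = 0.5, α = 0.05, and power 0.8, it comes to 0.025/0.425 ≈ 0.06. Whatever one makes of such screening rates, they describe a property of the collection of hypotheses being screened, not how well tested any particular claim is, which is the distinction being drawn above.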
I very much hope that the Normal Deviate will consider a monthly post on the “error statistics philosophy” blog!
OK, I left “bullshit” in the post rather than substitute “B.S.”, so what’s going to happen to me? (I see “math babe” freely using four-letter words on her blog.) Anyway, one does wonder if anyone takes Krugman seriously anymore.