A reader calls my attention to Andrew Gelman’s blog announcing a talk that he’s giving today in French: “Philosophie et pratique de la statistique bayésienne” (“Philosophy and practice of Bayesian statistics”). He blogs:
I’ll try to update the slides a bit since a few years ago, to add some thoughts I’ve had recently about problems with noninformative priors, even in simple settings.
The location of the talk will not be convenient for most of you, but anyone who goes to the trouble of showing up will have the opportunity to laugh at my accent.
P.S. For those of you who are interested in the topic but can’t make it to the talk, I recommend these two papers on my non-inductive Bayesian philosophy:
 Philosophy and the practice of Bayesian statistics (with discussion). British Journal of Mathematical and Statistical Psychology, 8–18. (Andrew Gelman and Cosma Shalizi)
 Rejoinder to discussion. (Andrew Gelman and Cosma Shalizi)
 Induction and deduction in Bayesian data analysis. Rationality, Markets and Morals, special topic issue “Statistical Science and Philosophy of Science: Where Do (Should) They Meet In 2011 and Beyond?” (Andrew Gelman)
These papers, especially Gelman (2011), are discussed on this blog (in “U-Phils”). Comments by Senn, Wasserman, and Hennig may be found here and here, with a response here (please use the search for more).
As I say in my comments on Gelman and Shalizi, I think Gelman’s position is (or intends to be) inductive, in the sense of being ampliative (going beyond the data), but simply not probabilist, i.e., not a matter of updating priors. (A blog post is here.)[i] Here’s a snippet from my comments:
Although the subjective Bayesian philosophy, “strongly influenced by Savage (1954), is widespread and influential in the philosophy of science (especially in the form of Bayesian confirmation theory),” and while many practitioners perceive the “rising use of Bayesian methods in applied statistical work” (2) as supporting this Bayesian philosophy, the authors [Gelman and Shalizi] flatly declare that “most of the standard philosophy of Bayes is wrong” (2 n2). Despite their qualification that “a statistical method can be useful even if its philosophical justification is in error,” their stance will rightly challenge many a Bayesian.
This will be especially so when one has reached their third thesis, which seeks a new foundation that uses non-Bayesian ideas. Although the authors at first profess that their “perspective is not new”, but rather follows many other statisticians who emphasize “the value of Bayesian inference as an approach for obtaining statistical methods with good frequency properties” (3), they go on to announce they are “going beyond the evaluation of Bayesian methods based on their frequency properties as recommended by Rubin (1984), Wasserman (2006), among others, to emphasize the learning that comes from the discovery of systematic differences between model and data” (15). Moreover, they suggest that “implicit in the best Bayesian practice is a stance that has much in common with the error-statistical approach of Mayo (1996), despite the latter’s frequentist orientation. Indeed, crucial parts of Bayesian data analysis, such as model checking, can be understood as ‘error probes’ in Mayo’s sense” (2), which might be seen as using modern statistics to implement the Popperian criteria for severe tests.
The authors claim their statistical analysis is used “not for computing the posterior probability that any particular model was true—we never actually did that” (8), but rather “to fit rich enough models” and, upon discerning that aspects of the model “did not fit our data” (8), to build a more complex, better-fitting model, which in turn called for alteration when faced with new data.
This cycle, they rightly note, involves a “non-Bayesian checking of Bayesian models” (11), but they should not describe it as purely deductive: it is not. Nor should they wish to hold to that old distorted view of a Popperian test as “the rule of deduction which says that if p implies q, and q is false, then p must be false” (with p the hypothesis and q the data) (22). Having thrown off one oversimplified picture, they should avoid slipping into another.
My full comments are here.
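The model-checking cycle Gelman and Shalizi describe is typically carried out with posterior predictive checks, the “error probes” alluded to above. Here is a minimal sketch of one, with invented count data, a conjugate Gamma–Poisson model, and the sample maximum as test statistic (all chosen purely for illustration, not taken from their paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: counts we tentatively model as Poisson(lambda).
y = np.array([1, 0, 2, 1, 0, 7, 1, 2, 0, 1])
n = len(y)

# Under a Gamma(a, b) prior (a, b invented here), the posterior for
# lambda is conjugate: Gamma(a + sum(y), rate b + n).
a, b = 1.0, 1.0
post_shape = a + y.sum()
post_rate = b + n

# Posterior predictive check: simulate replicated data sets and compare
# a test statistic T (here the maximum) with its observed value.
T_obs = y.max()
reps = 4000
lam = rng.gamma(post_shape, 1.0 / post_rate, size=reps)
T_rep = np.array([rng.poisson(l, size=n).max() for l in lam])

# Posterior predictive p-value: proportion of replications whose maximum
# is at least as extreme as the observed maximum.
ppp = np.mean(T_rep >= T_obs)
print(round(float(ppp), 3))
```

A very small posterior predictive p-value signals that the model fails to reproduce this feature of the data, which in the Gelman–Shalizi cycle prompts expanding or revising the model rather than computing a posterior probability that the model is true.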
[i] Some might view such a “non-inductive Bayesian philosophy” as an “inductive non-Bayesian philosophy”. Gelman is likely to scream at this, peut-être en français. I’ve forgotten what little I knew of French.
Some related papers:
Gelman, A., and C. Shalizi. 2012. “Philosophy and the Practice of Bayesian Statistics (with discussion)”. British Journal of Mathematical and Statistical Psychology (BJMSP). Article first published online 24 February 2012.
Mayo, D., and D. Cox. 2006. “Frequentist Statistics as a Theory of Inductive Inference”. In Optimality: The Second Erich L. Lehmann Symposium, edited by J. Rojo, 77–97. Vol. 49, Lecture Notes-Monograph Series, Institute of Mathematical Statistics (IMS). Reprinted in D. Mayo and A. Spanos, 2010: 247–275.
Mayo, D., and A. Spanos. 2011. Error Statistics. In Philosophy of Statistics, edited by P. S. Bandyopadhyay and M. R. Forster. Handbook of the Philosophy of Science. Oxford: Elsevier.
Senn, S. Comment on Gelman and Shalizi (pages 65–67).
Wasserman, L. 2006. “Frequentist Bayes is Objective”. Bayesian Analysis 1(3):451-456. URL http://ba.stat.cmu.edu/journal/2006/vol01/issue03/wasserman.pdf.
What you are saying seems to imply you can be an error statistician without being a frequentist.
I can see how you can compute a p-value conditional on an exchangeable model.
At first glance it seems more problematic to use concepts such as coverage conditional on an exchangeable model (instead of the usual i.i.d. assumption)…
David: I’m not sure I understand your points, even about being a “frequentist”.
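David’s first point, computing a p-value conditional on an exchangeable model, can be illustrated with a permutation test: under the null hypothesis the observations are exchangeable, so permuting group labels generates the null distribution of the statistic without any i.i.d.-sampling model. A minimal sketch with made-up two-group data:

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented two-group measurements; under the null the group labels
# are exchangeable, which is all the test conditions on.
x = np.array([4.1, 5.0, 3.8, 4.6, 5.2])
y = np.array([3.2, 3.9, 3.1, 3.7, 2.8])

obs = x.mean() - y.mean()
pooled = np.concatenate([x, y])
n = len(x)

# Null distribution of the mean difference via random relabelings.
reps = 10000
stats = np.empty(reps)
for i in range(reps):
    perm = rng.permutation(pooled)
    stats[i] = perm[:n].mean() - perm[n:].mean()

# Two-sided p-value conditional on exchangeability (add-one correction
# so the observed arrangement counts as one of the permutations).
p = (1 + np.sum(np.abs(stats) >= abs(obs))) / (reps + 1)
print(round(float(p), 4))
```

Whether frequentist guarantees such as coverage transfer to this conditional-on-exchangeability setting is exactly the harder question David raises above.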