Monthly Archives: January 2012
No-Pain Philosophy: Skepticism, Rationality, Popper, and All That: First of 3 Parts
Updating & Downdating: One of the Pieces to Pick up on
Before moving on to a couple of rather different areas, there’s an issue that, while mentioned by both Senn and Gelman, did not come up for discussion; so let me just note it here as one of the pieces to pick up on later.
“It is hard to see what exactly a Bayesian statistician is doing when interacting with a client. There is an initial period in which the subjective beliefs of the client are established. These prior probabilities are taken to be valuable enough to be incorporated in subsequent calculation. However, in subsequent steps the client is not trusted to reason. The reasoning is carried out by the statistician. As an exercise in mathematics it is not superior to showing the client the data, eliciting a posterior distribution and then calculating the prior distribution; as an exercise in inference Bayesian updating does not appear to have greater claims than ‘downdating’ and indeed sometimes this point is made by Bayesians when discussing what their theory implies. (59) …” Stephen Senn
“As I wrote in 2008, if you could really construct a subjective prior you believe in, why not just look at the data and write down your subjective posterior.” Andrew Gelman commenting on Senn
I’ve even heard subjective Bayesians concur on essentially this point, but I would think that many would take issue with it…no?
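Senn’s “downdating” remark is easy to see in a conjugate setting. A minimal sketch, using a hypothetical Beta-Binomial example of my own choosing (none of these numbers come from Senn or Gelman): given the data, the map from prior to posterior is a simple shift of parameters, so it can be run backwards just as easily, which is the sense in which updating has no special algebraic status over downdating.

```python
def update(prior, data):
    """Conjugate Beta-Binomial update: Beta(a, b) prior + data -> posterior."""
    a, b = prior
    successes, trials = data
    return (a + successes, b + (trials - successes))

def downdate(posterior, data):
    """Invert the update: Beta(a, b) posterior + data -> the prior it came from."""
    a, b = posterior
    successes, trials = data
    return (a - successes, b - (trials - successes))

prior = (2.0, 3.0)                     # an elicited Beta(2, 3) prior (illustrative)
data = (7, 10)                         # 7 successes in 10 trials (illustrative)
posterior = update(prior, data)        # Beta(9, 6)
recovered = downdate(posterior, data)  # Beta(2, 3): the original prior, exactly
print(posterior, recovered)
```

Of course, this only shows the arithmetic is reversible; whether an elicited posterior or an elicited prior is the more trustworthy starting point is precisely the issue under debate.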
U-PHIL (3): Stephen Senn on Stephen Senn!
I am grateful to Deborah Mayo for having highlighted my recent piece. I am not sure that it deserves the attention it is receiving. Deborah has spotted a flaw in my discussion of pragmatic Bayesianism. In praising the use of background knowledge I can be talking neither about automatic Bayesianism nor about subjective Bayesianism. It is clear that background knowledge ought not generally to lead to uninformative priors (whatever they might be), and so is not really what objective Bayesianism is about. On the other hand, all subjective Bayesians care about is coherence, and it is easy to produce examples where Bayesians quite logically will react differently to evidence; so what exactly is ‘background knowledge’? Continue reading
U-PHIL: Stephen Senn (2): Andrew Gelman
U-PHIL: Stephen Senn (1): C. Robert, A. Jaffe, and Mayo (brief remarks)
RMM-6: Special Volume on Stat Sci Meets Phil Sci
The article “The Renegade Subjectivist: José Bernardo’s Reference Bayesianism” by Jan Sprenger has now been published in our special volume of the on-line journal, Rationality, Markets, and Morals (Special Topic: Statistical Science and Philosophy of Science: Where Do/Should They Meet?)
Abstract: This article motivates and discusses José Bernardo’s attempt to reconcile the subjective Bayesian framework with a need for objective scientific inference, leading to a special kind of objective Bayesianism, namely reference Bayesianism. We elucidate principal ideas and foundational implications of Bernardo’s approach, with particular attention to the classical problem of testing a precise null hypothesis against an unspecified alternative.
"Philosophy of Statistics": Nelder on Lindley
Recently (Nelder, 1999) I have argued that statistics should be called statistical science, and that probability theory should be called statistical mathematics (not mathematical statistics). I think that Professor Lindley’s paper should be called the philosophy of statistical mathematics, and within it there is little that I disagree with. However, my interest is in the philosophy of statistical science, which I regard as different. Statistical science is not just about the study of uncertainty but rather deals with inferences about scientific theories from uncertain data. Continue reading
Mayo Philosophizes on Stephen Senn: "How Can We Cultivate Senn’s-Ability?"
Although, in one sense, Senn’s remarks echo the passage of Jim Berger’s that we deconstructed a few weeks ago, Senn at the same time seems to reach an opposite conclusion. He points out how, in practice, people who claim to have carried out a (subjective) Bayesian analysis have actually done something very different—but that then they heap credit on the Bayesian ideal. (See also the blog post “Who Is Doing the Work?”) Continue reading
“You May Believe You Are a Bayesian But You Are Probably Wrong”
The following is an extract (58-63) from the contribution by
Stephen Senn (Full article)
Head of the Methodology and Statistics Group,
Competence Center for Methodology and Statistics (CCMS), Luxembourg
I am not arguing that the subjective Bayesian approach is not a good one to use. I am claiming instead that the argument is false that because some ideal form of this approach to reasoning seems excellent in theory it therefore follows that in practice using this and only this approach to reasoning is the right thing to do. A very standard form of argument I do object to is the one frequently encountered in many applied Bayesian papers where the first paragraph lauds the Bayesian approach on various grounds, in particular its ability to synthesize all sources of information, and in the rest of the paper the authors assume that because they have used the Bayesian machinery of prior distributions and Bayes theorem they have therefore done a good analysis. It is this sort of author who believes that he or she is Bayesian but in practice is wrong. (58) Continue reading
U-PHIL: "So you want to do a philosophical analysis?"
PhilStatLaw: Bad-Faith Assertions of Conflicts of Interest?*
Don’t Birnbaumize that Experiment my Friend*
(A) “It is not uncommon to see statistics texts argue that in frequentist theory one is faced with the following dilemma: either to deny the appropriateness of conditioning on the precision of the tool chosen by the toss of a coin[i], or else to embrace the strong likelihood principle which entails that frequentist sampling distributions are irrelevant to inference once the data are obtained. This is a false dilemma … The ‘dilemma’ argument is therefore an illusion”. (Cox and Mayo 2010, p. 298)
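The dilemma Cox and Mayo dismiss is usually motivated by a two-instrument story: a fair coin toss selects either a precise or an imprecise measuring instrument, and the question is whether inference should condition on the instrument actually used or average over the toss. A minimal numerical sketch, with illustrative precisions of my own choosing (not figures from Cox and Mayo), shows how far the two relevant standard errors can diverge:

```python
import math

precise_sd = 1.0      # sd of the instrument chosen if the coin lands heads (illustrative)
imprecise_sd = 10.0   # sd of the instrument chosen if it lands tails (illustrative)

# Conditional inference: once the toss is observed, the relevant standard
# error is that of the instrument actually used -- either 1.0 or 10.0.

# Unconditional inference: treat the coin toss as part of the sampling
# distribution. With mean-zero errors and a fair coin, the variance of a
# single measurement is the equal-weight average of the two variances.
uncond_var = 0.5 * precise_sd**2 + 0.5 * imprecise_sd**2
uncond_sd = math.sqrt(uncond_var)

print(f"conditional se: {precise_sd} or {imprecise_sd}; "
      f"unconditional se: {uncond_sd:.2f}")
```

The unconditional standard error (about 7.1 here) matches neither instrument, which is why denying conditioning looks so unappealing; the Cox–Mayo point quoted above is that accepting conditioning in such cases does not force one all the way to the strong likelihood principle.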
Model Validation and the LP-(Long Playing Vinyl Record)
A Bayesian acquaintance writes:
Although the Birnbaum result is of primary importance for sampling theorists, I’m still interested in it because many Bayesian statisticians think that model checking violates the likelihood principle, as if this principle is a fundamental axiom of Bayesian statistics.
But this is puzzling for two reasons. First, if the LP does not preclude testing for assumptions (and he is right that it does not[i]), then why not simply explain that rather than appeal to a disproof of something that actually never precluded model testing? To take the disproof of the LP as grounds to announce: “So there! Now even Bayesians are free to test their models” would seem only to ingrain the original fallacy. Continue reading