*A friend from Elba surprised me by sending the interesting paper and discussion of Dennis Lindley (2000), “The Philosophy of Statistics,” which I hadn’t seen in years. She suggested, as especially apt, J. Nelder’s remarks; I recommend the full article and discussion:*

Recently (Nelder, 1999) I have argued that statistics should be called statistical science, and that probability theory should be called statistical mathematics (not mathematical statistics). I think that Professor Lindley’s paper should be called the philosophy of statistical mathematics, and within it there is little that I disagree with. However, my interest is in the philosophy of statistical science, which I regard as different. Statistical science is not just about the study of uncertainty but rather deals with inferences about scientific theories from uncertain data. An important quality about theories is that they are essentially open ended; at any time someone may come along and produce a new theory outside the current set. This contrasts with probability, where to calculate a specific probability it is necessary to have a bounded universe of possibilities over which the probabilities are defined. When there is intrinsic open-endedness it is not enough to have a residual class of all the theories that I have not thought of yet. The best that we can do is to express relative likelihoods of different parameter values, without any implication that one of them is true. Although Lindley stresses that probabilities are conditional, I do not think that this copes with the open-endedness problem.

I follow Fisher in distinguishing between inferences about specific events, such as that it will rain here tomorrow, and inferences about theories. …

General ideas like exchangeability and coherence are fine in themselves, but problems arise when we try to apply them to data from the real world. In particular when combining information from several data sets we can assume exchangeability, but the data themselves may strongly suggest that this assumption is not true. Similarly we can be coherent and wrong, because the world is not as assumed by Lindley. I find the procedures of scientific inference to be more complex than those defined in the paper. These latter fall into the class of ‘wouldn’t it be nice if’, i.e. would it not be nice if the philosophy of statistical mathematics sufficed for scientific inference. I do not think that it does. (325)

- Lindley, D. V. (2000), “The Philosophy of Statistics,” *Journal of the Royal Statistical Society*, Series D (*The Statistician*), Vol. 49, No. 3, 293–337.
- Nelder, J. A. (2000), Commentary on “The Philosophy of Statistics,” *Journal of the Royal Statistical Society*, Series D (*The Statistician*), Vol. 49, No. 3, 324–325.
- Nelder, J. A. (1999), “From Statistics to Statistical Science,” *The Statistician*, 48, 257–267.

I think Nelder makes a very important distinction between what we might do to organize our personal beliefs (ponder our personal priors) versus what type of data analysis is suitable for presentation to others. Others are interested in what a proper experiment or observational study has to say.

What I found most interesting was his use of “statistical science” for developing and appraising scientific theories. I’m sure many will consider the recommendation to “leave out the subjective” an oversimplification (even though that’s not quite what he meant), but his emphasis on the open-endedness required for useful scientific theorizing is, I think, deeper and more significant.

My takeaways include:

1. The simple truth that one cannot legitimately assign a probability to a theory, because the probabilities of all competing theories must sum to 1, and we cannot identify all of the theories.
2. That statistical science is about probing aspects of theories, which is a complex (and messy, perhaps) endeavor. It cannot fit very nicely in a tidy little package (or model).
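The first point can be made concrete with a little arithmetic. The sketch below (hypothetical theory labels and made-up numbers, purely illustrative) shows that probabilities assigned over the theories we happen to have thought of are artifacts of treating that set as closed: when a genuinely new theory arrives from outside the set, every existing assignment must be revised downward just to make room for it, with no new evidence about those theories.

```python
# Illustrative sketch (hypothetical theories and numbers): probability
# assignments to theories presuppose a closed universe of alternatives.

# Probabilities over the only theories currently on the table must sum to 1.
priors = {"T1": 0.6, "T2": 0.3, "T3": 0.1}
assert abs(sum(priors.values()) - 1.0) < 1e-12

# A genuinely new theory T4 appears from outside the current set.
# To give it any mass at all, every existing assignment must shrink --
# the original numbers were artifacts of the closed set, not of evidence.
p_new = 0.2
revised = {t: p * (1 - p_new) for t, p in priors.items()}
revised["T4"] = p_new
assert abs(sum(revised.values()) - 1.0) < 1e-12

# T1's probability drops from 0.6 to 0.48 without any new data about T1.
print(revised["T1"])
```

Nothing about the renormalization is driven by evidence concerning T1, T2, or T3; it is forced purely by the bookkeeping requirement that the universe of possibilities be bounded, which is exactly the open-endedness problem Nelder points to.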

True. What baffles me is why so many seem stuck in a kind of “statisticism,” wherein it is imagined that all of the complex moves of planning, collecting, modeling, and drawing inferences from data must be formalized within a probability computation. Pearson said that he and Neyman were always thinking of contexts wherein the planning was closely connected to interpretation, and several piecemeal tests were envisioned, with lots of emphasis on how to model the phenomenon. (I think Fisher said it was an accident that experimental design was developed separately from inference.) Even when one gets to the “inferential” part, probability seems to get the logic wrong, at least for the appraisals that interest me, e.g., how good a job did this research do at probing and ruling out errors that could render it mistaken to infer some claim h? It might provide lousy grounds for both h and its denial. It’s a different mindset or task. But it is more general, subsuming the special cases wherein the “inference” properly takes the form of a probability assignment to events. Some kind of “reconciliation” might be found here, but not until people are pried away from the statisticism standpoint.