Posts Tagged With: Bayesian foundations in shambles

Dennis Lindley’s “Philosophy of Statistics”


Yesterday’s slight detour [i] presents an opportunity to (re)read Lindley’s “Philosophy of Statistics” (2000) (see also an earlier post).  I recommend the full article and discussion. There is actually much here on which we agree.

The Philosophy of Statistics

Dennis V. Lindley

The Statistician (2000) 49:293-319

Summary. This paper puts forward an overall view of statistics. It is argued that statistics is the study of uncertainty. The many demonstrations that uncertainties can only combine according to the rules of the probability calculus are summarized. The conclusion is that statistical inference is firmly based on probability alone. Progress is therefore dependent on the construction of a probability model; methods for doing this are considered. It is argued that the probabilities are personal. The roles of likelihood and exchangeability are explained. Inference is only of value if it can be used, so the extension to decision analysis, incorporating utility, is related to risk and to the use of statistics in science and law. The paper has been written in the hope that it will be intelligible to all who are interested in statistics.

Around eight pages in we get another useful summary:

Let us summarize the position reached.

(a) Statistics is the study of uncertainty.

(b) Uncertainty should be measured by probability.

(c) Data uncertainty is so measured, conditional on the parameters.

(d) Parameter uncertainty is similarly measured by probability.

(e) Inference is performed within the probability calculus, mainly by equations (1) and (2) (p. 301).
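Lindley's numbered equations (1) and (2) are not reproduced in this excerpt. As a generic illustration of what "inference within the probability calculus" amounts to, here is a minimal sketch of a discrete Bayes update for a coin's bias; the particular prior, data, and candidate values are mine, not Lindley's:

```python
from math import comb

def bayes_update(prior, likelihood):
    """Combine a prior {theta: p} with likelihoods {theta: L} via Bayes's rule."""
    unnorm = {th: prior[th] * likelihood[th] for th in prior}
    total = sum(unnorm.values())
    return {th: v / total for th, v in unnorm.items()}

# Uniform prior over three candidate biases; data: 7 heads in 10 tosses.
thetas = [0.2, 0.5, 0.8]
prior = {th: 1 / 3 for th in thetas}
lik = {th: comb(10, 7) * th**7 * (1 - th) ** 3 for th in thetas}
post = bayes_update(prior, lik)
```

On Lindley's view, every uncertainty about parameters and data is handled by exactly this kind of calculation: the posterior is the prior reweighted by the likelihood and renormalized.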


Categories: Statistics | 50 Comments

The Error Statistical Philosophy and The Practice of Bayesian Statistics: Comments on Gelman and Shalizi

The following is my commentary on a paper by Gelman and Shalizi, forthcoming (some time in 2013) in the British Journal of Mathematical and Statistical Psychology* (submitted February 14, 2012).
_______________________

The Error Statistical Philosophy and the Practice of Bayesian Statistics: Comments on A. Gelman and C. Shalizi, “Philosophy and the Practice of Bayesian Statistics”**
Deborah G. Mayo

1. Introduction

I am pleased to have the opportunity to comment on this interesting and provocative paper. I shall begin by citing three points at which the authors happily depart from existing work on statistical foundations.

First, there is the authors’ recognition that methodology is ineluctably bound up with philosophy. If nothing else, “strictures derived from philosophy can inhibit research progress” (p. 4). They note, for example, the reluctance of some Bayesians to test their models because of their belief that “Bayesian models were by definition subjective,” or perhaps because checking involves non-Bayesian methods (p. 4, n. 4).

Second, they recognize that Bayesian methods need a new foundation. Although the subjective Bayesian philosophy, “strongly influenced by Savage (1954), is widespread and influential in the philosophy of science (especially in the form of Bayesian confirmation theory),” and while many practitioners perceive the “rising use of Bayesian methods in applied statistical work” (p. 2) as supporting this Bayesian philosophy, the authors flatly declare that “most of the standard philosophy of Bayes is wrong” (p. 2, n. 2). Despite their qualification that “a statistical method can be useful even if its philosophical justification is in error,” their stance will rightly challenge many a Bayesian.


Categories: Statistics

Painting-by-Number #1

In an exchange with an anonymous commentator, responding to my May 23 blog post, I was asked what I meant by an argument (in favor of a method) based on “painting-by-number” reconstructions. “Painting-by-numbers” refers to reconstructing an inference or application of method X (analogous to a method of painting) to make it consistent with an application of method Y (painting with a paint-by-number kit). The locution comes from EGEK (Mayo 1996) and alludes to a kind of argument sometimes used to garner “success stories” for a method: i.e., show that any case, given enough latitude, could be reconstructed so as to be an application of (or at least consistent with) the preferred method.

Referring to specific applications of error-statistical methods, I wrote in EGEK (pp. 100-101):

We may grant that experimental inferences, once complete, may be reconstructed so as to be seen as applications of Bayesian methods—even though that would be stretching it in many cases. My point is that the inferences actually made are applications of standard non-Bayesian methods [e.g., significance tests]. . . . The point may be made with an analogy. Imagine the following conversation:

Categories: Statistics | 12 Comments

Jean Miller: Happy Sweet 16 to EGEK! (Shalizi Review: “We Have Ways of Making You Talk”)

Jean Miller here. (I obtained my PhD with D. Mayo in Phil/STS at VT.) Some of us “island philosophers” have been looking to pick our favorite book reviews of EGEK (Mayo 1996; Lakatos Prize 1999) to celebrate its “sweet sixteen” this month. This review, by Dr. Cosma Shalizi (CMU, Statistics), has been chosen as the top favorite (in the category of reviews outside philosophy). Below are some excerpts; it was hard to choose, as each paragraph held some new surprise, or a unique way to succinctly nail down the views in EGEK. You can read the full review here. Enjoy.

“We Have Ways of Making You Talk, or, Long Live Peircism-Popperism-Neyman-Pearson Thought!”
by Cosma Shalizi

After I’d bungled teaching it enough times to have an idea of what I was doing, one of the first things students in my introductory physics classes learned (or anyway were taught), and which I kept hammering at all semester, was error analysis: estimating the uncertainty in measurements, propagating errors from measured quantities into calculated ones, and some very quick and dirty significance tests, tests for whether or not two numbers agree, within their associated margins of error. I did this for purely pragmatic reasons: it seemed like one of the most useful things we were supposed to teach, and also one of the few areas where what I did had any discernible effect on what they learnt. Now that I’ve read Mayo’s book, I’ll be able to offer another excuse to my students the next time I teach error analysis, namely, that it’s how science really works.

I exaggerate her conclusion slightly, but only slightly. Mayo is a dues-paying philosopher of science (literally, it seems), and like most of the breed these days is largely concerned with questions of method and justification, of “ampliative inference” (C. S. Peirce) or “non-demonstrative inference” (Bertrand Russell). Put bluntly and concretely: why, since neither can be deduced rigorously from unquestionable premises, should we put more trust in David Grinspoon‘s ideas about Venus than in those of Immanuel Velikovsky? A nice answer would be something like, “because good scientific theories are arrived at by employing thus-and-such a method, which infallibly leads to the truth, for the following self-evident reasons.” A nice answer, but not one which is seriously entertained by anyone these days, apart from some professors of sociology and literature moonlighting in the construction of straw men. In the real world, science is alas fallible, subject to constant correction, and very messy. Still, mess and all, we somehow or other come up with reliable, codified knowledge about the world, and it would be nice to know how the trick is turned: not only would it satisfy curiosity (“the most agreeable of all vices” — Nietzsche), and help silence such people as do, in fact, prefer Velikovsky to Grinspoon, but it might lead us to better ways of turning the trick. Asking scientists themselves is nearly useless: you’ll almost certainly just get a recital of whichever school of methodology we happened to blunder into in college, or impatience at asking silly questions and keeping us from the lab. If this vice is to be indulged in, someone other than scientists will have to do it: namely, the methodologists.
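The error-analysis routine Shalizi describes teaching—propagating uncertainties into calculated quantities and checking whether two numbers agree within their margins of error—can be sketched in a few lines. The function names and the two-sigma cutoff are my illustrative choices, not anything from the review:

```python
from math import sqrt

def add_in_quadrature(*sigmas):
    """Propagate independent uncertainties through a sum or difference:
    the combined sigma is the square root of the sum of squares."""
    return sqrt(sum(s * s for s in sigmas))

def agree(x1, s1, x2, s2, n_sigma=2.0):
    """Quick-and-dirty significance test: do two measurements agree
    within n_sigma combined standard errors?"""
    return abs(x1 - x2) <= n_sigma * add_in_quadrature(s1, s2)

# Two measurements of g (m/s^2): 9.79 +/- 0.03 and 9.81 +/- 0.02.
# |difference| = 0.02, combined sigma ~ 0.036, so they agree at 2 sigma.
print(agree(9.79, 0.03, 9.81, 0.02))
```

This is exactly the "quick and dirty" test mentioned above: crude, but it forces students to say how large a discrepancy would have to be before it counts as a real disagreement.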

Categories: philosophy of science, Statistics | 33 Comments

Mayo Philosophizes on Stephen Senn: “How Can We Cultivate Senn’s-Ability?”


Although, in one sense, Senn’s remarks echo the passage of Jim Berger’s that we deconstructed a few weeks ago, Senn at the same time seems to reach an opposite conclusion. He points out how, in practice, people who claim to have carried out a (subjective) Bayesian analysis have actually done something very different—yet they then heap credit on the Bayesian ideal. (See also the blog post “Who Is Doing the Work?”)

Categories: Philosophy of Statistics, Statistics, U-Phil | 7 Comments

JIM BERGER ON JIM BERGER!

Fortunately, we have Jim Berger interpreting himself this evening (see December 11).

Jim Berger writes: 

A few comments:

1. Objective Bayesian priors are often improper (i.e., have infinite total mass), but this is not a problem when they are developed correctly. But not every improper prior is satisfactory. For instance, the constant prior is known to be unsatisfactory in many situations. The ‘solution’ pseudo-Bayesians often use is to choose a constant prior over a large but bounded set (a ‘weakly informative’ prior), saying it is now proper and so all is well. This is not true; if the constant prior on the whole parameter space is bad, so will be the constant prior over the bounded set. The problem is, in part, that some people confuse proper priors with subjective priors and, having learned that true subjective priors are fine, incorrectly presume that weakly informative proper priors are fine.
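Berger's point—that bounding a constant prior does not change its behavior—can be illustrated numerically. In this sketch (my example, not Berger's) the data are a single normal observation with known standard deviation; the posterior mean under a flat prior on [-B, B] is computed by a crude Riemann sum, and it matches the improper flat prior's answer (the sample value itself) for every reasonably large B:

```python
from math import exp

def post_mean_flat(xbar, sigma, B, n=20000):
    """Posterior mean of a normal mean mu under a flat prior on [-B, B],
    given one observation xbar with known sigma (midpoint Riemann sum)."""
    h = 2 * B / n
    num = den = 0.0
    for i in range(n):
        mu = -B + (i + 0.5) * h
        w = exp(-0.5 * ((xbar - mu) / sigma) ** 2)  # likelihood; prior is constant
        num += mu * w
        den += w
    return num / den

# Widening the bound changes nothing: the 'weakly informative' bounded flat
# prior reproduces the improper flat prior's posterior mean (here, xbar = 2).
for B in (5, 50, 500):
    print(B, post_mean_flat(2.0, 1.0, B))
```

Whatever defect the unbounded constant prior has in a given problem, truncating it to a large box inherits essentially the same posterior, which is exactly why declaring it "proper now" does not rescue it.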

Categories: Irony and Bad Faith, Statistics, U-Phil | 13 Comments

Who is Really Doing the Work?*

A common assertion (of which I was reminded in Leiden*) is that in scientific practice, by and large, the frequentist sampling theorist (error statistician) ends up in essentially the “same place” as the Bayesian, as if to downplay the importance of disagreements within the Bayesian family, let alone between Bayesians and frequentists. Such an utterance, in my experience, is indicative of a frequentist in exile (as described on this blog). [1] Perhaps the claim makes the frequentist feel less in exile; but it also renders any subsequent claim to prefer the frequentist philosophy just that—a matter of preference, without a pressing foundational imperative. Yet, even if one were to grant an agreement in numbers, it is altogether crucial to ascertain who or what is really doing the work. If we don’t understand what is really responsible for success stories in statistical inference, we cannot hope to improve those methods, adjudicate rival assessments when they do arise, or get ideas for extending and developing tools when entering brand new arenas. Clearly, understanding the underlying foundations of one or another approach is crucial for a philosopher of statistics, but practitioners too should care, at least some of the time.

Categories: Statistics
