In an exchange with an anonymous commentator, responding to my May 23 blog post, I was asked what I meant by an argument (in favor of a method) based on “painting-by-number” reconstructions. “Painting-by-numbers” refers to reconstructing an inference or application of method X (analogous to a method of painting) to make it consistent with an application of method Y (painting with a paint-by-number kit). The locution comes from EGEK (Mayo 1996) and alludes to a kind of argument sometimes used to garner “success stories” for a method: i.e., show that any case, given enough latitude, could be reconstructed so as to be an application of (or at least consistent with) the preferred method.
Referring to specific applications of error-statistical methods, I wrote in EGEK (Mayo 1996, pp. 100-101):
We may grant that experimental inferences, once complete, may be reconstructed so as to be seen as applications of Bayesian methods—even though that would be stretching it in many cases. My point is that the inferences actually made are applications of standard non-Bayesian methods [e.g., significance tests]. . . . The point may be made with an analogy. Imagine the following conversation:
Paint-by-number artist to Leonardo Da Vinci: I can show that the Mona Lisa may be seen as the result of following a certain paint-by-number kit that I can devise. Whether you know it or not you are really a painter by number.
Da Vinci: But you devised your paint-by-number Mona Lisa only by starting with my painting, and I assure you I did not create it by means of a paint-by-number algorithm. Your ability to do this in no way shows that the paint-by-number method is a good way to produce new art. If I were required to have a paint-by-number algorithm before beginning to paint, I would not have arrived at my beautiful Mona Lisa.
If the argument doesn’t hold up for painting-by-numbers, I allege, it does not hold up for reconstructing methods.
This is, by the way, an application of the one and only rule I gave for this blog, back at the start (Sept. 4, 2011). I called it drilling rule #1:
If one argument is precisely analogous to another, in all relevant respects, and the second argument is pretty clearly fishy, then so is the first. Likewise, if one argument is precisely analogous to another, in all relevant respects, and the second argument passes swimmingly, then so must the first.
(More specifically, the commentator suggests that any error-statistical “supplement” I might think was needed could in fact be interpreted so as to already be part of the Bayesian diet. But the reconstruction would not have weight if it were merely like painting-by-numbers.)
From this analogy, I deny the commentator’s suggestion that in order to show “that the philosophical differences between paradigms matter and merit investigation, one has to give real-world examples where useful methods cannot be viewed as e.g. Bayesian.” Reconstructions are too readily available; yet we see the relevance of philosophical issues, both within and between statistical philosophies. The reader may find examples offered by contributors in RMM’s special topic on “Statistical Science and Philosophy of Science” and in this blog.
At the same time, I readily admit in my RMM paper:
I would never be so bold as to suggest that a lack of clarity about philosophical foundations in any way hampers progress in statistical practice. Only in certain moments do practitioners need a philosophical or self-reflective standpoint. Yet those moments, I maintain, are increasingly common.
I go on to note:
Even though statistical science (as with other sciences) generally goes about its business without attending to its own foundations, implicit in every statistical methodology are core ideas that direct its principles, methods, and interpretations. I will call this its statistical philosophy. Yet the same statistical method may and usually does admit of more than one statistical philosophy. When faced with new types of problems or puzzling cases, or when disagreement between accounts arises, there is a need to scrutinize the underlying statistical philosophies. Too often the associated statistical philosophies remain hidden in such foundational debates, in the very place we most need to see them revealed. (p. 81)
Thank you for the clarification. However, as I did write in the earlier post(s), I was not referring to “an inference or application”, but to _all_ applications, i.e. all datasets, modulo asymptotic approximations. The comments above don’t seem to address this distinction (perhaps we mean something different by “method”?), and in particular your charge that “reconstructions are too readily available” is not an accurate reflection of the statistical literature.
* I don’t have any problem with the drilling rule, and I don’t think anything I wrote contradicts this.
* I did not suggest that “any error-statistical “supplement” [Mayo] might think was needed could in fact be interpreted so as to already be part of the Bayesian diet”. I said that this _might_ be possible, and also that examples where one could prove that no such interpretations exist would be both interesting and useful.
* The “merit and matter” comments that seem to have motivated this post (I didn’t ask, contrary to your opening line) were advice on convincing statisticians to pay attention to philosophical differences. If you don’t want to reach that audience, fine. If you do, real-world examples, in addition to philosophical scrutiny, would help a lot.