In an exchange with an anonymous commentator, responding to my May 23 blog post, I was asked what I meant by an argument (in favor of a method) based on “painting-by-number” reconstructions. “Painting-by-numbers” refers to reconstructing an inference or application of method X (analogous to a method of painting) to make it consistent with an application of method Y (painting with a paint-by-number kit). The locution comes from EGEK (Mayo 1996) and alludes to a kind of argument sometimes used to garner “success stories” for a method: i.e., show that any case, given enough latitude, could be reconstructed so as to be an application of (or at least consistent with) the preferred method.
Referring to specific applications of error-statistical methods, I wrote in EGEK (pp. 100-101):
We may grant that experimental inferences, once complete, may be reconstructed so as to be seen as applications of Bayesian methods—even though that would be stretching it in many cases. My point is that the inferences actually made are applications of standard non-Bayesian methods [e.g., significance tests]. . . . The point may be made with an analogy. Imagine the following conversation:
Paint-by-number artist to Leonardo Da Vinci: I can show that the Mona Lisa may be seen as the result of following a certain paint-by-number kit that I can devise. Whether you know it or not you are really a painter by number.
Da Vinci: But you devised your paint-by-number Mona Lisa only by starting with my painting, and I assure you I did not create it by means of a paint-by-number algorithm. Your ability to do this in no way shows that the paint-by-number method is a good way to produce new art. If I were required to have a paint-by-number algorithm before beginning to paint, I would not have arrived at my beautiful Mona Lisa.
If the argument doesn’t hold up for painting-by-numbers, I allege, it does not hold up for reconstructing methods.
This is, by the way, an application of the one and only rule I gave for this blog, back at the start (Sept. 4, 2011). I called it drilling rule #1:
If one argument is precisely analogous to another, in all relevant respects, and the second argument is pretty clearly fishy, then so is the first. Likewise, if one argument is precisely analogous to another, in all relevant respects, and the second argument passes swimmingly, then so must the first.
(More specifically, the commentator suggests that any error-statistical “supplement” I might think was needed could in fact be interpreted so as to already be part of the Bayesian diet. But the reconstruction would not have weight if it were merely like painting-by-numbers.)
From this analogy, I deny the commentator’s suggestion that in order to show “that the philosophical differences between paradigms matter and merit investigation, one has to give real-world examples where useful methods cannot be viewed as e.g. Bayesian.” Reconstructions are too readily available; yet we see the relevance of philosophical issues, both within and between statistical philosophies. The reader may find examples offered by contributors in RMM’s special topic on “Statistical Science and Philosophy of Science” and in this blog.
At the same time, I readily admit in my RMM paper:
I would never be so bold as to suggest that a lack of clarity about philosophical foundations in any way hampers progress in statistical practice. Only in certain moments do practitioners need a philosophical or self-reflective standpoint. Yet those moments, I maintain, are increasingly common.
I go on to note:
Even though statistical science (as with other sciences) generally goes about its business without attending to its own foundations, implicit in every statistical methodology are core ideas that direct its principles, methods, and interpretations. I will call this its statistical philosophy. Yet the same statistical method may and usually does admit of more than one statistical philosophy. When faced with new types of problems or puzzling cases, or when disagreement between accounts arises, there is a need to scrutinize the underlying statistical philosophies. Too often the associated statistical philosophies remain hidden in such foundational debates, in the very place we most need to see them revealed. (p. 81)
Thank you for the clarification. However, as I did write in the earlier post(s), I was not referring to “an inference or application”, but to _all_ applications, i.e., all datasets, modulo asymptotic approximations. The comments above don’t seem to address this distinction (perhaps we mean something different by “method”?), and in particular your charge that “reconstructions are too readily available” is not an accurate reflection of the statistical literature.
Also:
* I don’t have any problem with the drilling rule, and I don’t think anything I wrote contradicts this.
* I did not suggest that “any error-statistical ‘supplement’ [Mayo] might think was needed could in fact be interpreted so as to already be part of the Bayesian diet”. I said that this _might_ be possible, and also that examples where one could prove that no such interpretations exist would be both interesting and useful.
Finally:
* The “merit and matter” comments that seem to have motivated this (I didn’t ask, contrary to your opening line) were advice on convincing statisticians to pay attention to philosophical differences. If you don’t want to reach that audience, fine. If you do, real-world examples, in addition to philosophical scrutiny, would help a lot.
Honored Guest:
G: I was not referring to “an inference or application”, but to _all_ applications, i.e., all datasets, modulo asymptotic approximations.
Mayo: Don’t really get this, in relation to the issue, sorry.
G: If you don’t want to reach that audience, fine. If you do, real-world examples, in addition to philosophical scrutiny, would help a lot.
Mayo: I think what’s quite interesting in many of the issues raised in foundations of statistics nowadays—as represented, just for an example, in several of the RMM papers—is that they grow out of a concern in practice as to whether certain methods are kosher (e.g., for Bayesians to test models, as in Gelman), as opposed to some kind of pure philosophy. I guess my feeling is, as I’ve said many times, that philosophical foundations concern special features that would not be of interest to most practitioners; those who can read these kinds of papers and either not see or not care about the philosophical issues shouldn’t be the slightest bit bothered. On the other hand, I think philosophers of inductive-statistical inference, Bayesian epistemologists, and the like, should care, and they should be much more immersed in actual examples and the live foundational issues that statisticians are discussing (please see the first sentence of my RMM paper)—as they used to be 25+ years ago, ironically. Statisticians are taking the lead these days, much more so than philosophers! More on this later.
Other clarifications: I did feel I was already going on too long with various qualifications as regards the possible meanings of the “guest” commentator, so I finally left off, figuring people could readily look up the exact words.
In the midst of travels now, can’t comment as much as I’d like, but thanks for the interesting remarks.
I don’t dispute the invalidity of an argument that uses a reconstruction of method X in terms of method Y to subsume method X’s success. However, I find it unfortunate that the somewhat pejorative phrase “paint-by-numbers” has been attached to the process of reconstruction itself rather than to the subsumption argument. A valid use of reconstruction is simply to understand method X in terms appropriate to method Y; I have advocated reconstruction on this basis here.
Corey: I don’t think that I’ve attached a pejorative label to reconstruction in general by any means; it was brought up in connection with the particular challenge. I agree that trying to illuminate a method or principle by showing how it might further a given goal is worthwhile (and that reconstructing can serve that purpose). Still, I had thought a lot about a way to make my point (back when writing EGEK), and eventually hit upon painting-by-numbers because it shares so many features with what I had in mind (perhaps, in part, because I am an ex-painter).
Mayo, Corey: I read it as pejorative as well.
I scarcely think that a humorous reference in a single book, used to illuminate a very particular gambit we sometimes see for making a very particular point, counts as having attached a pejorative label to all reconstructions. As always in my work, one has to think about what’s actually being argued, and what’s being said, in the case at hand—no overly broad brushstrokes.
I’m happy to accept this statement of authorial intent. I’m willing to defend my original interpretation of the text of the post as reasonable (albeit wrong) if anyone wants to have that discussion.
I actually think the paint-by-numbers analogy is pretty apt and gets directly to the point Mayo is trying to make: that it is a long stretch to claim that you ARE a Y just because I can reconstruct what you’ve done after the fact as if it had been done by Y. This is a common move in STS and philosophy: to claim that even though an episode used method X to achieve some goal G, because I can reconstruct it using another (and often foundationally incompatible) method Y, method Y REALLY explains the results (achieves desired goal G). Common examples of misplaced credit are often seen in social constructivist or relativist explanations of scientific episodes. In many of their reconstructions, sociological factors (e.g., social negotiations) rather than epistemological factors (e.g., (in)correct statistical analyses, (in)adequate randomization) are given, if not sole credit, then a preponderance of the credit for closing debate on experimental claims or for adopting/rejecting experimental techniques. A good example is Harry Collins’ (2004) book, Gravity’s Shadow, on Joseph Weber’s claim to have detected gravity waves using his gravity bar detectors and the ensuing scientific debates about his claims. While Collins’ project is to show that social reasons were in the main responsible for rejecting Weber’s claims that he had detected gravity waves, if one carefully reads Collins’ book, especially his footnotes, one finds an almost overwhelming amount of non-social “evidence” given for closing debate, including Weber’s use of incorrect statistical analyses, and evidence that supposedly simultaneous readings between labs were actually not simultaneous due to the labs being in different time zones (so detecting a gravitational wave at 1100 at the two different sites was not simultaneous but instead occurred an hour apart); and so on. Collins has taken his box of 64 sociological crayons and colored in the entire episode in social hues.
However, the fact that social factors can be used to explain this episode is not enough to show that they should be used to explain it, by which I mean it doesn’t show that they are the factors responsible for rejecting Weber’s claims.(1)
I think the Bayesian move to reconstruct standard NP statistics in a Bayesian (dis)guise—e.g., attaching posterior probabilities to hypotheses, which frequentists would never do (assign probabilities to hypotheses, that is)—is equally fallacious. (This is not to claim there are no merits to the Bayesian approach; I am only talking about reconstructing frequentist methods as Bayesian ones here.)
(1) Although I disagree with his interpretation (see chapter 6 of my dissertation, Naturalism and Objectivity: Methods and Meta-methods, at http://www.lib.vt.edu), Collins’ book (published by the University of Chicago Press) is truly excellent in providing all the pertinent experimental details as well as the sociological ones in the history of Joseph Weber’s gravity experiments.
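The after-the-fact reconstruction discussed above can be seen in miniature in the textbook normal-mean example (my illustration, not one from the post): for a one-sided test of a normal mean with known variance, the frequentist p-value numerically coincides with the Bayesian posterior probability of the null under an improper flat prior, so the test can be repainted in Bayesian colors after the fact—without that showing the inference was Bayesian. A minimal sketch, assuming one observation x from N(mu, 1):

```python
# Hypothetical illustration: "reconstructing" a one-sided test as Bayesian.
# Model: X ~ N(mu, 1), one observation x; test H0: mu <= 0 vs H1: mu > 0.
from math import erf, sqrt

def std_normal_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + erf(z / sqrt(2)))

x = 1.7  # an illustrative observed value, chosen arbitrarily

# Frequentist one-sided p-value: P(X >= x) computed under mu = 0.
p_value = 1 - std_normal_cdf(x)

# Bayesian "reconstruction" under an improper flat prior on mu:
# the posterior is mu | x ~ N(x, 1), so P(H0 | x) = P(mu <= 0 | x) = Phi(-x).
posterior_prob_H0 = std_normal_cdf(-x)

print(p_value, posterior_prob_H0)  # agree up to floating-point rounding
```

The numerical match is exactly the painting-by-numbers situation: the same number is reachable by either route, which by itself says nothing about which method actually produced, or warrants, the inference.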
Jean: Thanks for this. I love the 64 sociological crayons, but I thought they would have bought that large 128 Crayola box!
Where did you get the Mona Lisa image? It’s just what I need for a presentation, and I would like permission to use it. Happy to reference the source. OK?
It’s not mine actually; most likely I modified something from online.
Nice work then, thanks for letting me know!