When any scientific conclusion is supposed to be [shown or disproved] on experimental evidence [or data], critics who still refuse to accept the conclusion are accustomed to take one of two lines of attack. They may claim that the interpretation of the [data] is faulty, that the results reported are not in fact those which should have been expected had the conclusion drawn been justified, or that they might equally well have arisen had the conclusion drawn been false. Such criticisms of interpretation are usually treated as falling within the domain of statistics. They are often made by professed statisticians against the work of others whom they regard as ignorant of or incompetent in statistical technique; and, since the interpretation of any considerable body of data is likely to involve computations, it is natural enough that questions involving the logical implications of the results of the arithmetical processes employed should be relegated to the statistician. At least I make no complaint of this convention. The statistician cannot evade the responsibility for understanding the processes he applies or recommends. My immediate point is that the questions involved can be dissociated from all that is strictly technical in the statistician’s craft…
The other type of criticism to which experimental results [or data] are exposed is that the experiment itself was ill designed or, of course, badly executed… This type of criticism is usually made by what I might call a heavyweight authority. Prolonged experience, or at least the long possession of a scientific reputation, is almost a prerequisite for developing successfully this line of attack. Technical details are seldom in evidence. The authoritative assertion “His controls are totally inadequate” must have temporarily discredited many a promising line of work; and such an authoritarian method of judgment must surely continue, human nature being what it is, so long as [general methods for data generation, modeling and analysis] are lacking…
[T]he subject matter [of this work] has been regarded from the point of view of an experimenter [or data analyst], who wishes to carry out his work competently, and having done so wishes to safeguard his results, so far as they are validly established, from ignorant criticism by different sorts of superior persons.
Seriously?
R.A. Fisher, _The Design of Experiments_
Kepler: Did you really know that right off? And there wasn’t even a prize attached.
Indeed, it’s the first two pages of the introductory chapter of that book – I just checked (via an electronic copy).
Nicole: Yes, now we might send them to Nate Silver…
A perfect present for NS in honor of Fisher’s birthday on Feb 17.
I really wonder how much of the common misinterpretation of Fisher is due to his ideas on experimental design (and the generation and interpretation of data) being read separately from statistical inference. He explicitly refers to these areas as requiring background information going beyond technical statistics, and yet some people (heavyweights), apparently unaware of this, castigate him for heralding mechanical, unthinking, even “immaculate” procedures (whatever that can mean). It’s quite ironic…
I particularly enjoyed this line: “The statistician cannot evade the responsibility for understanding the processes he applies or recommends.” Fisher clearly appreciated the need to understand the “whys” as well as the “hows” of statistical methods, the opposite of a mechanical approach. Sounds like a good motto for this blog, and for the role of philosophers and others looking at the foundations of statistics.
A short story on the theme of ‘heavyweight authority’:
I was at a workshop a few years back where a heavyweight mathematician was selling L1-regularized optimization as the solution for pretty much everything under the sun. He’s extremely talented and was trumpeting a tool that’s very useful. There was more than a little ego on display, but it was justified, so I was okay with it. What I wasn’t okay with was when he belittled a previous speaker for using least-squares-based optimization when, of course, she should have been using Split Bregman. So I piped up, “Well, if you’d bothered to look at the data you’d have seen that L2 was entirely appropriate.” Which it was. He was a little taken aback. I imagine he didn’t have ‘nobodies’ challenging him in public very often, let alone reprimanding him for not paying attention to the data. Anyhow, I was more than a little pleased with myself for blowing the whistle on him, and even more so when a few people came up afterwards and said, “Glad you did that.” Speak truth to power, baby. That’s the only way to live.
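For concreteness, the kind of “look at the data” check I have in mind is something like the toy sketch below: when the true signal is dense and the noise roughly Gaussian, a plain least-squares (L2) fit is the right tool, and an L1 penalty only shrinks the estimate away from the truth. Everything here is illustrative (the data, the penalty weight, and the scikit-learn calls are my own hypothetical stand-ins, not a reconstruction of the analysis from the workshop):

```python
# Hypothetical illustration: dense signal + Gaussian noise is the setting
# where ordinary least squares (L2) is entirely appropriate, and an
# L1 penalty merely biases the estimate toward spurious sparsity.
import numpy as np
from sklearn.linear_model import Lasso, LinearRegression

rng = np.random.default_rng(0)

n, p = 200, 10
X = rng.standard_normal((n, p))
true_coef = rng.standard_normal(p)        # no sparsity in the true signal
y = X @ true_coef + 0.1 * rng.standard_normal(n)

ols = LinearRegression().fit(X, y)        # L2: plain least squares
lasso = Lasso(alpha=0.1).fit(X, y)        # L1-penalized alternative

# "Looking at the data": residual diagnostics, not authority, settle the choice.
resid = y - ols.predict(X)
print("OLS residual std (noise sd was 0.1):", resid.std())
print("OLS   coefficient error:", np.linalg.norm(ols.coef_ - true_coef))
print("Lasso coefficient error:", np.linalg.norm(lasso.coef_ - true_coef))
```

Had the residuals come out heavy-tailed, or the true signal sparse, the L1 machinery would have earned its keep; the point is that the data, not the speaker’s reputation, should decide.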
Right on, Chris!