I continue a week of Fisherian posts in honor of his birthday (Feb 17). This is his contribution to the “Triad”, an exchange between Fisher, Neyman, and Pearson 20 years after the Fisher–Neyman break-up. All three papers are very short.

*“Statistical Methods and Scientific Induction”*

*by Sir Ronald Fisher (1955)*

**SUMMARY**

The attempt to reinterpret the common tests of significance used in scientific research as though they constituted some kind of acceptance procedure and led to “decisions” in Wald’s sense, originated in several misapprehensions and has led, apparently, to several more.

The three phrases examined here, with a view to elucidating the fallacies they embody, are:

- “Repeated sampling from the same population”,
- Errors of the “second kind”,
- “Inductive behavior”.

Mathematicians without personal contact with the Natural Sciences have often been misled by such phrases. The errors to which they lead are not only numerical.

To continue reading Fisher’s paper, follow the link in the original post.

The most noteworthy feature is Fisher’s position on fiducial inference, which is typically downplayed. I’m placing a summary of, and link to, Neyman’s response below; it’s that interesting.

**“Note on an Article by Sir Ronald Fisher”**

**by Jerzy Neyman (1956)**

**Summary**

(1) FISHER’S allegation that, contrary to some passages in the introduction and on the cover of the book by Wald, this book does not really deal with experimental design is unfounded. In actual fact, the book is permeated with problems of experimentation. (2) Without consideration of hypotheses alternative to the one under test and without the study of probabilities of the two kinds, no purely probabilistic theory of tests is possible. (3) The conceptual fallacy of the notion of fiducial distribution rests upon the lack of recognition that valid probability statements about random variables usually cease to be valid if the random variables are replaced by their particular values. The notorious multitude of “paradoxes” of fiducial theory is a consequence of this oversight. (4) The idea of a “cost function for faulty judgments” appears to be due to Laplace, followed by Gauss.

Most of the themes are very well known, so I mention only a lesser-known point. Fisher (1955) criticizes Neyman and Pearson’s 1933 paper for having called his work an example of “inductive behavior”. I had missed this. So N-P really were, way back in ’33, trying to describe and defend what Fisher seemed to be up to in saying things like “so we may take it that there’s no effect” (when a null isn’t rejected). And what better way to make sense of Fisher’s talk of fiducial probability than as giving the proportion of cases in which an (interval) estimation method is right in the aggregate?
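That “aggregate” reading can be sketched with a small simulation (the true mean, σ, and sample size below are arbitrary assumptions for illustration, not anything drawn from the papers): over many repetitions, the procedure “sample, then report x̄ ± 1.96σ/√n” covers the fixed true mean in roughly 95% of cases, even though any single realized interval either contains it or does not.

```python
import random
import statistics

random.seed(42)

TRUE_MU = 10.0   # hypothetical true mean (an assumption of this sketch)
SIGMA = 2.0      # known standard deviation (also assumed)
N = 25           # sample size per repetition
Z = 1.96         # two-sided 95% normal quantile
TRIALS = 10_000  # number of repeated samplings

covered = 0
for _ in range(TRIALS):
    sample = [random.gauss(TRUE_MU, SIGMA) for _ in range(N)]
    xbar = statistics.fmean(sample)
    half = Z * SIGMA / N ** 0.5          # half-width of the interval
    if xbar - half <= TRUE_MU <= xbar + half:
        covered += 1

# The long-run proportion of intervals that cover the true mean
print(f"empirical coverage: {covered / TRIALS:.3f}")
```

Of course, nothing in this output licenses attaching the 0.95 to any one realized interval, which is exactly where the instantiation dispute lives.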

The other noteworthy and surprising thing is that Fisher is still adhering to the idea that probabilistic instantiation is a legitimate deductive move, and castigating Neyman for not seeing this. This is some 20 years after the fiducial argument was first being puzzled over, if not refuted. It bothers me, because it makes me question some of Fisher’s best insights. It’s equally noteworthy that Neyman is still having trouble explaining what goes wrong with such an instantiation.

People are reluctant to get into the fiducial business in interpreting the Neyman–Fisher dispute over all those years, but I’ve realized in the past couple of years that this is a big mistake. Lehmann, for example, says we can discuss Fisher and Neyman without getting into it, but the arguments between them are highly distorted as a result. Why does it matter? It shouldn’t. But people have taken to heart the idea that Fisherian p-values are inductive, while N-P error probabilities are behavioristic. Since the latter are assumed irrelevant to inference, people are taught p-values without alternative hypotheses. Power is thrown in, and the inconsistent hybrid is born. And on it goes…

You haven’t blogged on the fiducial approach here, have you? That would be interesting, even more so perhaps with some discussion by people who use the fiducial approach these days, such as Jan Hannig.

Christian:

There were a few posts:

https://errorstatistics.com/2016/02/17/cant-take-the-fiducial-out-of-fisher-if-you-want-to-understand-the-n-p-performance-philosophy-i/

https://errorstatistics.com/2016/02/20/deconstructing-the-fisher-neyman-conflict-wearing-fiducial-glasses-continued/

From Schweder and Hjort’s recent (2016) ‘Confidence, likelihood and probability’ book*:

“The present book attempts to fill this gap by promoting what Hampel (2006) calls the original and correct fiducial argument (Fisher, 1930, 1973), as opposed to Fisher’s later incorrect fiducial theory. The second decade of the second millennium is witnessing a renewed interest in fiducial analysis (see, e.g., Hannig [2009] and references therein) and in the related concept of confidence distribution (see e.g. the review and discussion paper Xie and Singh [2013]).”

*http://www.cambridge.org/catalogue/catalogue.asp?isbn=9780521861601

Om: I’m somewhat familiar with these attempts, and was at Xie’s “fusion” conference last April. I take it as a good sign that these programs are solving current problems in statistics while remaining within frequentist modeling, or so they describe it. Some of these individuals were discussants on my strong likelihood principle paper in Stat Sci. They all noted that the strong likelihood principle fails in their methods. I’m not sure which of these attempts are, like Fraser’s confidence approach, using probability to qualify the method’s error probabilities.
