*Jerzy Neyman: April 16, 1894 – August 5, 1981.* This reblogs posts under “The Will to Understand Power” & “Neyman’s Nursery” here & here.

Way back when, although I’d never met him, I sent my doctoral dissertation, *Philosophy of Statistics*, to one person only: Professor Ronald Giere. (And he would read it, too!) I knew from his publications that he was a leading defender of frequentist statistical methods in philosophy of science, and that he’d worked for a time with Birnbaum in NYC.

Some ~~ten~~ 15 years ago, Giere decided to quit philosophy of statistics (while remaining in philosophy of science): I think it had to do with a certain form of statistical exile (in philosophy). He asked me if I wanted his papers—a mass of work on statistics and statistical foundations gathered over many years. Could I make a home for them? I said yes. Then came his caveat: there would be a lot of them.

As it happened, we were building a new house at the time, Thebes, and I designed a special room on the top floor that could house a dozen or so file cabinets. (I painted it pale rose, with white lacquered book shelves up to the ceiling.) Then, for more than 9 months (same as my son!), I waited . . . Several boxes finally arrived, containing hundreds of files—each meticulously labeled with titles and dates. More than that, the labels were hand-typed! I thought, If Ron knew what a slob I was, he likely would not have entrusted me with these treasures. *(Perhaps he knew of no one else who would actually want them!)*

I assumed that I knew most of the papers, certainly those by Neyman, Pearson, and Birnbaum, but the files also contained early drafts, pale mimeo versions of papers, and, best of all, hand-written comments Giere had exchanged with Birnbaum and others before the work was all tidied up. For a year or so, the papers received few visits. Then, in 2003, after a storm that killed our internet connection, I climbed the stairs to find an article of Birnbaum’s (more on this later).

I was flipping through some articles (that I assumed were in Neyman’s books and collected works) when I found one, then another, and then a third Neyman paper that would turn out to be dramatically at odds, philosophically—in ways large and small—with everything I had read by Neyman on Neyman and Pearson methods. (Aris Spanos and I came to refer to them as the “hidden Neyman papers,” below.) So what was so startling? Stay tuned . . .

***************

[NN2]

Let me pick up where I left off in “Neyman’s Nursery” [built to house Giere’s statistical papers-in-exile]. The main goal of the discussion is to get us to exercise correctly our “will to understand power,” if only little by little. One of the two surprising papers I came across the night our house was hit by lightning has the tantalizing title “The Problem of Inductive Inference” (Neyman 1955). It reveals a use of statistical tests strikingly different from the long-run behavior construal most associated with Neyman. Surprising too, Neyman is talking to none other than the logical positivist philosopher of confirmation, Rudolf Carnap:

I am concerned with the term “degree of confirmation” introduced by Carnap. …We have seen that the application of the locally best one-sided test to the data … failed to reject the hypothesis [that the n observations come from a source in which the null hypothesis is true]. The question is: does this result “confirm” the hypothesis that H_{0} is true of the particular data set? (Neyman 1955, pp. 40-41).

Neyman continues:

The answer … depends very much on the exact meaning given to the words “confirmation,” “confidence,” etc. If one uses these words to describe one’s intuitive feeling of confidence in the hypothesis tested H_{0}, then…. the attitude described is dangerous.… [T]he chance of detecting the presence [of discrepancy from the null], when only [n] observations are available, is extremely slim, even if [the discrepancy is present]. Therefore, the failure of the test to reject H_{0} cannot be reasonably considered as anything like a confirmation of H_{0}. The situation would have been radically different if the power function [corresponding to a discrepancy of interest] were, for example, greater than 0.95. (ibid.)

The general conclusion is that it is a little rash to base one’s intuitive confidence in a given hypothesis on the fact that a test failed to reject this hypothesis. A more cautious attitude would be to form one’s intuitive opinion only after studying the power function of the test applied.

Neyman alludes to a one-sided test of the mean µ of a Normal distribution, based on n iid samples with known standard deviation σ; call it test T+.

H_{0}: µ ≤ µ_{0} against H_{1}: µ > µ_{0}.

*The test statistic* d(X) is the standardized sample mean: d(X) = √n(X̄ − µ_{0})/σ.

The test rule: Infer a (positive) discrepancy from µ_{0} iff d(x_{0}) > c_{α}, where c_{α} corresponds to a difference statistically significant at the α level.

In Carnap’s example the test could not reject the null hypothesis, i.e., d(x_{0}) ≤ c_{α}, but (to paraphrase Neyman) the problem is that the chance of detecting the presence of discrepancy δ from the null, with so few observations, is extremely slim, even if [δ is present].

We are back to our old friend: interpreting negative results!

“One may be confident in the absence of that discrepancy only if the power to detect it were high.”

The power of the test T+ to detect discrepancy δ:

(1) P(d(X) > c_{α}; µ = µ_{0} + δ)

It is interesting to hear Neyman talk this way, since it is at odds with the more behavioristic construal he usually championed. He sounds like a Cohen-style power analyst! Still, power is calculated relative to an outcome just making/missing the cutoff c_{α}. This is, in effect, the worst case of a negative (non-significant) result, and if the actual outcome corresponds to a larger p-value, that should be taken into account in interpreting the results. It is more informative, therefore, to look at the probability of getting a worse fit (with the null hypothesis) than you did:

(2) P(d(X) > d(x_{0}); µ = µ_{0} + δ)

In this example, this gives a measure of the severity (or degree of corroboration) for the inference µ < µ_{0} + δ.

Although (1) may be low, (2) may be high (for numbers, see Mayo and Spanos 2006).
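To make the contrast concrete, here is a minimal sketch (hypothetical numbers; `power` and `severity` are my own names, not terms from the paper) computing (1) and (2) for test T+ with known σ:

```python
from statistics import NormalDist

Z = NormalDist()                 # standard normal
n, sigma = 100, 1.0              # assumed sample size and known sigma
c_alpha = Z.inv_cdf(0.95)        # one-sided cutoff, alpha = 0.05

def power(delta):
    # (1): P(d(X) > c_alpha; mu = mu0 + delta); d(X) ~ N(delta*sqrt(n)/sigma, 1)
    return 1 - Z.cdf(c_alpha - delta * n**0.5 / sigma)

def severity(d_obs, delta):
    # (2): P(d(X) > d(x0); mu = mu0 + delta) — severity for mu < mu0 + delta
    return 1 - Z.cdf(d_obs - delta * n**0.5 / sigma)

# A non-significant result well below the cutoff: (1) is low, yet (2) is high.
print(power(0.1))            # ≈ 0.26: low power against delta = 0.1
print(severity(-0.5, 0.1))   # ≈ 0.93: high severity for mu < mu0 + 0.1
```

Note that when d_obs equals the cutoff c_{α}, (2) reduces to (1), which is why ordinary power is the worst case of a negative result.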

Spanos and I (Mayo and Spanos 2006) couldn’t find a term in the literature defined precisely this way–the way I’d defined it in Mayo (1996) and before. We were thinking at first of calling it “attained power,” but then came across what some have called “observed power,” which is very different (and very strange). Those measures are just like ordinary power but calculated assuming the value of the mean equals the observed mean! (Why anyone would want to do this and then apply power-analytic reasoning is unclear. I’ll come back to this in my next post, NN3.) Anyway, we refer to it as the Severity Interpretation of “Acceptance” (SIA) in Mayo and Spanos 2006.

The claim in (2) could also be made out by viewing the p-value as a random variable and calculating its distribution for various alternatives (Cox 2006, 25). This reasoning yields a core frequentist principle of evidence, FEV (Mayo and Cox 2010, 256):

FEV:^{1} A moderate p-value is evidence of the absence of a discrepancy d from H_{0} only if there is a high probability the test would have given a worse fit with H_{0} (i.e., smaller p value) were a discrepancy d to exist.
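The p-value-as-random-variable reading can be checked by simulation. This sketch (my own illustration; all numbers are made up) draws p-values under an alternative µ = µ₀ + δ and checks how often they fall below a moderate observed p-value:

```python
import random
from statistics import NormalDist

Z = NormalDist()
random.seed(1)

mu0, sigma, n = 0.0, 1.0, 25
delta = 0.5          # hypothetical discrepancy of interest
p_obs = 0.30         # a moderate observed p-value (made up)

def simulated_p():
    # one sample mean drawn under mu = mu0 + delta, then its p-value
    xbar = random.gauss(mu0 + delta, sigma / n**0.5)
    return 1 - Z.cdf((xbar - mu0) * n**0.5 / sigma)

ps = [simulated_p() for _ in range(100_000)]
frac_smaller = sum(p < p_obs for p in ps) / len(ps)
# frac_smaller comes out well above 0.95: were the discrepancy delta real,
# a smaller p-value (worse fit with H0) would almost surely have occurred,
# so p_obs is evidence of the absence of delta, per FEV.
```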

It is important to see that it is only in the case of a negative result that severity for various inferences is in the same direction as power. In the case of significant results, d(x) in excess of the cutoff, the opposite concern arises—namely, the test is too sensitive. So severity is always relative to the particular inference being entertained: speaking of the “severity of a test” simpliciter is an incomplete statement in this account. These assessments enable sidestepping classic fallacies of tests that are either too sensitive or not sensitive enough.^{2}

________________________________________

2 The full version of our frequentist principle of evidence FEV corresponds to the interpretation of a small p-value:

x is evidence of a discrepancy d from H_{0} iff, if H_{0} is a correct description of the mechanism generating x, then, with high probability, a less discordant result would have occurred.

Severity (SEV) may be seen as a meta-statistical principle that follows the same logic as FEV reasoning within the formal statistical analysis.

By making a SEV assessment relevant to the inference under consideration, we obtain a measure where high (low) values always correspond to good (poor) evidential warrant.

It didn’t have to be done this way, but I decided it was best, even though it means appropriately swapping out the claim H for which one wants to assess SEV.

NOTE: There are 5 Neyman’s Nursery posts total (NN1-NN5). Search this blog for the 3 others (all relating to power).

NN3:

https://errorstatistics.com/2011/11/12/neymans-nursery-nn-3-shpower-vs-power/

REFERENCES:

Cohen, J. (1988), *Statistical Power Analysis for the Behavioral Sciences*, 2nd ed. Hillsdale, NJ: Erlbaum.

Mayo, D. G. and Cox, D. R. (2006), “Frequentist Statistics as a Theory of Inductive Inference,” in *Optimality: The Second Erich L. Lehmann Symposium* (J. Rojo, ed.), Lecture Notes–Monograph Series, Vol. 49, Institute of Mathematical Statistics (IMS): 77-97.

Mayo, D. G. and Spanos, A. (2006), “Severe Testing as a Basic Concept in a Neyman-Pearson Philosophy of Induction,” *British Journal for the Philosophy of Science*, 57: 323-357.

Mayo, D. G. and Spanos, A. (2010), “Introduction and Background: Part I: Central Goals, Themes, and Questions; Part II: The Error-Statistical Philosophy,” in *Error and Inference: Recent Exchanges on Experimental Reasoning, Reliability and the Objectivity and Rationality of Science* (D. Mayo and A. Spanos, eds.), Cambridge: CUP: 1-14, 15-27.

Neyman, J. (1955), “The Problem of Inductive Inference,” *Communications on Pure and Applied Mathematics*, VIII, 13-46.

I really love the Neyman’s Nursery posts — I find the history of statistics fascinating, but I rarely encounter it in my applied work. I especially appreciate learning about this paper of Neyman’s, since it does so much to address and correct the perception of statistical hypothesis testing as a kind of thought-free, robotic exercise.

(As a Bayesian, I have my own criticisms of the approach, but I prefer to aim my criticisms at the best version of the approach rather than a strawman.)

Corey: I’m so glad to hear it, so thanks for writing. As de Finetti (1972) put it, “inductive behavior…that was for Neyman simply a slogan underlining and explaining the difference between his, the Bayesian and the Fisherian formulations” became, with Abraham Wald’s work, “something much more substantial” (p. 176). De Finetti called this “the involuntarily destructive aspect of Wald’s work” (ibid.). What was intended as a mere metaphor has become, with Neyman-Pearson critics, all too real. Lehmann’s neat decision-theoretic formulation early on didn’t help this impression, even though he was even less behavioristic than Neyman (about statistical inference).

To my surprise, when I revisited Neyman papers that I’d read dozens of times before, lo and behold! I found this post-data, inferential use of power in a sentence or two, e.g., in Neyman’s response to Fisher (1956) (part of what I call the Fisher-Pearson-Neyman “triad”).

https://errorstatistics.com/2012/02/11/jerzy-neyman-note-on-an-article-by-sir-ronald-fisher/

Have the Giere files been maintained in the good order that was expected? Does each folder retain its proper typed label? Would Neyman’s nursery pass a surprise (severe) inspection?

no