The most surprising discovery about today’s statistics wars is that some who hang out shingles as “statistical reformers” are themselves guilty of misdefining some of the basic concepts of error statistical tests—notably power. (See my recent post on power howlers.) A major purpose of my *Statistical Inference as Severe Testing: How to Get Beyond the Statistics Wars* (2018, CUP) is to clarify basic notions in order to get beyond what I call “chestnuts” and “howlers” of tests. The only way that disputing tribes can get beyond the statistics wars is by (at least) understanding the central concepts correctly. But these misunderstandings are *more* common than ever, so I’m asking readers to help. Why are they more common (than before the “new reformers” of the last decade)? I suspect that at least one reason is the popularity of Bayesian variants on tests: if one is looking to find posterior probabilities of hypotheses, then error statistical ingredients may tend to look as if that’s what they supply.

Run a little experiment if you come across a criticism based on the power of a test. Ask: are the critics interpreting the power of a test (with null hypothesis H) against an alternative H’ as if it were a posterior probability on H’? If they are, then it’s fallacious. But noticing it will help you understand why some people claim that high power against H’ warrants a stronger indication of a discrepancy H’, upon getting a just statistically significant result. This is wrong. (See my recent post on power howlers.)

I had a blogpost on Ziliak and McCloskey (2008) (Z & M) on power (from Oct. 2011), following a review of their book by Aris Spanos (2008). They write:

“The error of the second kind is the error of accepting the null hypothesis of (say) zero effect when the null is in fact false, that is, when (say) such and such a positive effect is true.”

So far so good, keeping in mind that “positive effect” refers to a parameter discrepancy, say δ, not an observed difference.

And the power of a test to detect that such and such a positive effect δ is true is equal to the probability of rejecting the null hypothesis of (say) zero effect when the null is in fact false, and a positive effect as large as δ is present.

Fine. Let this alternative be abbreviated H’(δ):

H’(δ): there is a positive (population) effect at least as large as δ.

Suppose the test rejects the null when it reaches a significance level of .01 (nothing turns on the small value chosen).

(1) The power of the test to detect H’(δ) =

Pr(test rejects null at the .01 level| H’(δ) is true).

Say it is 0.85.
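To make (1) concrete, here is a minimal Python sketch, assuming a one-sided z-test of a Normal mean (H0: μ = 0, known σ = 1). The discrepancy δ = 0.336 and sample size n = 100 are invented so that the power comes out near the 0.85 of the example.

```python
# Minimal sketch (invented numbers): power of a one-sided z-test at the
# .01 level, for a Normal(mu, 1) model with n observations. "delta" is
# the parameter discrepancy from the null mu = 0, not an observed difference.
from math import erf, sqrt

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + erf(x / sqrt(2)))

def power(delta, n, sigma=1.0, z_cut=2.326):  # 2.326 ~ z cutoff for alpha = .01
    """(1): Pr(test rejects the null at the .01 level | H'(delta) is true)."""
    return 1 - norm_cdf(z_cut - delta * sqrt(n) / sigma)

print(round(power(delta=0.336, n=100), 2))
```

Note that nothing here is a probability of H’(δ) itself; it is the probability of rejection, computed under the assumption that the discrepancy is δ.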

According to Z & M:

“[If] the power of a test is high, say, 0.85 or higher, then the scientist can be reasonably confident that at minimum the null hypothesis (of, again, zero effect if that is the null chosen) is false and that therefore his rejection of it is highly probably correct.” (Z & M, 132-3)

But this is not so. They are mistaking (1), defining power, as giving a posterior probability of .85 to H’(δ)! That is, (1) is being transformed to (1′):

(1′) Pr(H’(δ) is true | test rejects null at .01 level) = .85!

(I am using the symbol for conditional probability “|” all the way through for ease in following the argument, even though, strictly speaking, the error statistician would use “;”, abbreviating “under the assumption that”). Or to put this in other words, they argue:

1. Pr(test rejects the null | H’(δ) is true) = 0.85.

2. Test rejects the null hypothesis.

Therefore, the rejection is probably correct, e.g., the probability H’ is true is 0.85.

Oops. Premises 1 and 2 are true, but the conclusion fallaciously replaces premise 1 with 1′.
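A quick simulation shows the gap between premise 1 and (1′). The setup below is invented for illustration: the same one-sided z-test, with H’ (δ = 0.336) stipulated to hold in 1% of simulated cases. The power Pr(reject | H’) is still about 0.85, but the relative frequency with which H’ is true among rejections comes out nowhere near 0.85; it depends on the stipulated prevalence, which is no part of the power computation.

```python
# Invented setup: H'(delta = 0.336) holds in 1% of simulated cases, the
# null (delta = 0) in the other 99%. One-sided z-test at the .01 level.
import random
from math import sqrt

random.seed(1)
n, z_cut = 100, 2.326                       # z cutoff for alpha = .01
rejections = correct = 0
for _ in range(200_000):
    h_prime = random.random() < 0.01        # stipulated prevalence of H'
    delta = 0.336 if h_prime else 0.0
    z = random.gauss(delta * sqrt(n), 1.0)  # test statistic under that delta
    if z > z_cut:                           # reject the null
        rejections += 1
        correct += h_prime                  # rejection "correct" iff H' true

print(f"relative frequency of H' among rejections: {correct / rejections:.2f}")
```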

As Aris Spanos (2008) points out, “They have it *backwards*”. Extracting from a Spanos comment on this blog in 2011:

“When [Ziliak and McCloskey] claim that: ‘What is relevant here for the statistical case is that refutations of the null are trivially easy to achieve if power is low enough or the sample size is large enough.’ (Z & M, p. 152), they exhibit [confusion] about the notion of power and its relationship to the sample size; their two instances of ‘easy rejection’ separated by ‘or’ contradict each other! Rejections of the null are not easy to achieve when the power is ‘low enough’. They are more difficult exactly because the test does not have adequate power (generic capacity) to detect discrepancies from the null; that stems from the very definition of power and optimal tests. [Their second claim] is correct for the wrong reason. Rejections are easy to achieve when the sample size n is large enough due to high not low power. This is because the power of a ‘decent’ (consistent) frequentist test increases monotonically with n!” (Spanos 2011)
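Spanos’s monotonicity point is easy to check numerically. A minimal sketch, again assuming a one-sided z-test at the .01 level, with an invented fixed discrepancy δ = 0.2:

```python
# Power of a one-sided z-test at the .01 level, at a fixed discrepancy
# delta = 0.2, for increasing sample sizes (all numbers invented).
from math import erf, sqrt

def norm_cdf(x):
    return 0.5 * (1 + erf(x / sqrt(2)))

def power(delta, n, z_cut=2.326):           # z cutoff for alpha = .01
    return 1 - norm_cdf(z_cut - delta * sqrt(n))

powers = [power(0.2, n) for n in (25, 100, 400, 1600)]
print([round(p, 3) for p in powers])        # strictly increasing in n
```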

However, their slippery slides are very illuminating for common misinterpretations behind the criticisms of statistical significance tests–assuming a reader can catch them, because they only make them some of the time. [i] According to Ziliak and McCloskey (2008): “It is the history of Fisher significance testing. One erects little “significance” hurdles, six inches tall, and makes a great show of leaping over them, . . . If a test does a good job of uncovering efficacy, then the test has high power and the hurdles are high not low.” (ibid., p. 133)

They construe “little significance” as little hurdles! That explains how they wound up supposing high power translates into high hurdles. It’s the opposite. The higher the hurdle required before rejecting the null, the more difficult it is to reject, and the lower the power. High hurdles correspond to insensitive tests, like insensitive fire alarms. Using “sensitivity” rather than “power” might make this abundantly clear. We may coin: the high power = high hurdle (for rejection) fallacy. A powerful test does give the null hypothesis a harder time, in the sense that discrepancies from it are more probably detected. That makes it easier to infer the alternative H’. Z & M have their hurdles in a twist.

For a fuller discussion, see this link to Excursion 5 Tour I of SIST (2018). [ii] [iii]

**What power howlers have you found? Share them in the comments.**

Spanos, A. (2008), Review of S. Ziliak and D. McCloskey’s *The Cult of Statistical Significance*, *Erasmus Journal for Philosophy and Economics*, volume 1, issue 1: 154-164.

Ziliak, S. and McCloskey, D. (2008), *The Cult of Statistical Significance: How the Standard Error Costs Us Jobs, Justice and Lives*, University of Michigan Press.

[i] When it comes to raising the power by increasing sample size, they often make true claims, so it’s odd when there’s a switch or mixture, as when they say “refutations of the null are trivially easy to achieve if power is low enough or the sample size is large enough”. (Z & M, p. 152) It is clear that “low” is not a typo here either (as I at first assumed), so it’s mysterious.

[ii] Remember that a power computation is not the probability of data x under some alternative hypothesis, it’s the probability that data fall in the rejection region of a test under some alternative hypothesis. In terms of a test statistic d(X), it is Pr(test statistic d(X) is statistically significant | H’ true), at a given level of significance. So it’s the probability of getting any of the outcomes that would lead to statistical significance at the chosen level, under the assumption that alternative H’ is true. The alternative H’ used to compute power is a point in the alternative region. However, the inference that is made in tests is not to a point hypothesis but to an inequality, e.g., θ > θ’.
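Footnote [ii]’s point, that power is the probability of the whole rejection region rather than of any single outcome, can be made concrete with a discrete example (invented here, not from Z & M): a binomial test of H0: p = 0.5 against the point alternative p = 0.7, with n = 20. The power is the sum of the probabilities, under p = 0.7, of all the outcomes in the rejection region.

```python
# Invented binomial example: power as the probability of the whole
# rejection region under the alternative, not of any single outcome.
from math import comb

def binom_pmf(k, n, p):
    return comb(n, k) * p**k * (1 - p)**(n - k)

n, p0, p1 = 20, 0.5, 0.7
# smallest cutoff whose tail probability under the null is at most .01
cutoff = next(c for c in range(n + 1)
              if sum(binom_pmf(k, n, p0) for k in range(c, n + 1)) <= 0.01)
# power: probability, under p1, of landing anywhere in the rejection region
power = sum(binom_pmf(k, n, p1) for k in range(cutoff, n + 1))
print(cutoff, round(power, 3))
```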

[iii] My rendering of their fallacy above sees it as a type of affirming the consequent. To Z & M, “the so-called fallacy of affirming the consequent may not be a fallacy at all in a science that is serious about decisions and belief.” It is, they think, how Bayesians reason. They are right that if inference is by way of a Bayes boost, then affirming the consequent is not a fallacy. A hypothesis H that entails data x will get a “B-boost” from x, unless its probability is already 1. The error statistician objects that the probability of finding an H that perfectly fits x is high, even if H is false–but the Bayesian need not object if she isn’t in the business of error probabilities. The trouble erupts when Z & M take an error statistical concept like power, and construe it Bayesianly. Even more confusing, they only do so some of the time.

Looking for the source of this common fallacy, aside from the argument described in this post, some seem to be thinking that if H’ is inferred on the basis of a “powerful” test, then (it sounds like) H’ must have undergone a probative examination–or something like that. But raising the power of a test only raises the probative scrutiny that the null Ho is put to–not alternative H’. That’s one reason I suggest that high “sensitivity” (for problems with Ho) is less likely to lead to this confusion. But there’s no excuse for those claiming to be statistical reformers to make this mistake. There is at least one other explanation, and I’ll consider it in my next post or the one after.

In my 49-year career as a professional statistician I have come across many statistical howlers, but have rarely encountered thinking as odd as that exhibited by Z and Mc. This is probably because I have always worked in the practical, rather than the theoretical, arena. What is really alien to my way of thinking and, I guess, is the point you are making, is the two-step thought process:

Step 1: Do a statistical test and get a result.

Step 2: Consider the probability that the result in step 1 is correct.

It is the second step that doesn’t fit with my training or understanding.

Z & M, 132-3: This statement is nonsense for the following reason: whatever the true value of Delta, call it Delta(true), there will always be a Delta, call it Delta(0.85), for which the power is 85% or higher. This is the case even when Delta(true) is equal to zero. So, whenever a test rejects the null, Z & M will always conclude that the rejection is highly probably correct, even when the null is true.
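The commenter’s observation can be checked directly. For a one-sided z-test (n = 100, σ = 1, α = .01, all invented for illustration), solving Pr(reject | Δ) = 0.85 for Δ yields a Delta(0.85) with power at least 0.85, no matter what Delta(true) is, even zero:

```python
# Invented one-sided z-test: there is always a Delta(0.85) with power
# >= 0.85, whatever the true Delta (including Delta(true) = 0).
from math import erf, sqrt

def norm_cdf(x):
    return 0.5 * (1 + erf(x / sqrt(2)))

def power(delta, n, z_cut=2.326):            # z cutoff for alpha = .01
    return 1 - norm_cdf(z_cut - delta * sqrt(n))

n = 100
z_85 = 1.037                                  # ~ 85th percentile of N(0, 1)
delta_85 = (2.326 + z_85) / sqrt(n)           # solves power(delta_85, n) = 0.85
print(round(delta_85, 3), round(power(delta_85, n), 2))
```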