An argument that assumes the very thing that was to have been argued for is guilty of *begging the question*; signing on to an argument whose conclusion you favor even though you cannot defend its premises is to argue *unsoundly*, and in bad faith. When a whirlpool of “reforms” subliminally alters the nature and goals of a method, falling into these sins can be quite inadvertent. Start with a simple point on defining the power of a statistical test.

**I. Redefine Power?**

Given that power is one of the most confused concepts from Neyman-Pearson (N-P) frequentist testing, it’s troubling that in “Redefine Statistical Significance”, power gets redefined too. “Power,” we’re told, is a Bayes Factor BF “obtained by defining *H*_{1} as putting ½ probability on μ = ± m for the value of m that gives 75% power for the test of size α = 0.05. This *H*_{1} represents an effect size typical of that which is implicitly assumed by researchers during experimental design.” (material under Figure 1).

The Bayes factor discussed is of *H*_{1} over *H*_{0}, in two-sided Normal testing of *H*_{0}: μ = 0 versus *H*_{1}: μ ≠ 0.

“The variance of the observations is known. Without loss of generality, we assume that the variance is 1, and the sample size is also 1.” (p. 2 supplementary)

“This is achieved by assuming that μ under the alternative hypothesis is equal to ± (z_{0.025} + z_{0.75}) = ± 2.63 [1.96 + .67]. That is, the alternative hypothesis places ½ its prior mass on 2.63 and ½ its mass on -2.63”. (p. 2 supplementary)

Putting to one side whether this is “without loss of generality”, the use of “power” is quite different from the correct definition. The power of a test T (with type I error probability α) to detect a discrepancy μ’ is the probability that T generates an observed difference that is statistically significant at level α, assuming μ = μ’. The value z = 2.63 comes from the fact that the alternative against which this test has power .75 is the value .67 SE in excess of the cut-off for rejection. (Since an SE is 1, they add .67 to 1.96.) I don’t really see why it’s advantageous to ride roughshod over the definition of power, and it’s not the main point of this blogpost, but it’s worth noting if you’re to avoid sinking into the quicksand.
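The correct definition is easy to check numerically. Here is a minimal sketch of my own (not from the paper), using only Python’s standard library and the paper’s setup of σ = 1, n = 1:

```python
from statistics import NormalDist

N = NormalDist()  # standard normal; the paper takes sigma = 1, n = 1

alpha = 0.05
z_crit = N.inv_cdf(1 - alpha / 2)  # ~1.96, two-sided cutoff

def power(mu):
    """P(|Z| >= z_crit) when Z ~ N(mu, 1): the probability of a significant result."""
    return (1 - N.cdf(z_crit - mu)) + N.cdf(-z_crit - mu)

# The alternative against which the alpha = .05 test has 75% power:
mu_75 = z_crit + N.inv_cdf(0.75)  # 1.96 + 0.67 = 2.63
print(round(mu_75, 2), round(power(mu_75), 2))  # 2.63 0.75
```

So 2.63 is simply the parameter value the test would flag with probability .75, not a value anyone deems plausible.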

Let’s distinguish the appropriateness of the test for a Bayesian, from its appropriateness as a criticism of significance tests. The latter is my sole focus. The criticism is that, at least if we accept these Bayesian assignments of priors, the posterior probability on *H*_{0} will be larger than the p-value. So if you were to interpret a p-value as a posterior on *H*_{0} (a fallacy), or if you felt intuitively that a .05 (2-sided) statistically significant result should correspond to something closer to a .05 posterior on *H*_{0}, you should instead use a p-value of .005–or so it is argued. I’m not sure of the posterior on *H*_{0}, but the BF is between around 14 and 26.[1] That is the argument. If you lower the required p-value, it won’t be so easy to get statistical significance, and irreplicable results won’t be as common.[2]

The alternative corresponding to the preferred p =.005 requirement

“corresponds to a classical, two-sided test of size α = 0.005. The alternative hypothesis for this Bayesian test places ½ mass at 2.81 and ½ mass at -2.81. The null hypothesis for this test is rejected if the Bayes factor exceeds 25.7. Note that this curve is nearly identical to the “power” curve if that curve had been defined using 80% power, rather than 75% power. The Power curve for 80% power would place ½ its mass at ±2.80”. (Supplementary, p. 2)

z = 2.8 comes from adding .84 SE to the cut-off: 1.96 SE +.84 SE = 2.8. This gets to the alternative vs which the α = 0.05 test has 80% power. (See my previous post on power.)
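The supplement’s claim that the null is rejected “if the Bayes factor exceeds 25.7” can be verified from these ingredients. A quick numerical check of my own (standard library only), evaluating the BF at the two-sided .005 cutoff with the ±2.81 alternative:

```python
from statistics import NormalDist

N = NormalDist()

def bf10(x, mu1):
    """Bayes factor for H1 (half prior mass at +/-mu1) over H0: mu = 0, one N(mu, 1) draw."""
    return (0.5 * N.pdf(x - mu1) + 0.5 * N.pdf(x + mu1)) / N.pdf(x)

z_005 = N.inv_cdf(1 - 0.005 / 2)  # ~2.81, the two-sided .005 cutoff
bf_at_cutoff = bf10(z_005, 2.81)
print(round(bf_at_cutoff, 1))     # 25.7, matching the supplement's threshold
```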

Is this a good form of inference from the Bayesian perspective? (Why are we comparing μ = 0 and μ = 2.8?) As is always the case with “p-values exaggerate” arguments, there’s the supposition that testing should be on a point null hypothesis, with a lump of prior probability given to *H*_{0} (or to a region around 0 so small that it’s indistinguishable from 0). I leave those concerns for Bayesians, and I’m curious to hear from you. More importantly, does it constitute a relevant and sound criticism of significance testing? Let’s be clear: a tester might well have her own reasons for preferring z = 2.8 rather than z = 1.96, but that’s not the question. The question is whether they’ve provided a good argument for the significance tester to do so.

**II. What might the significance tester say?**

For starters, when she sets .8 power to detect a discrepancy, she doesn’t “implicitly assume” it’s a plausible population discrepancy, but simply one she wants the test to detect by producing a statistically significant difference (with probability .8). And if the test does produce a difference that differs statistically significantly from *H*_{0}, she does not infer the alternative against which the test had high power, call it μ’. (The supposition that she does grows out of fallaciously transposing “the conditional” involved in power.) Such a rule of interpreting data would have a high error probability of erroneously inferring a discrepancy μ’ (here 2.8).

The significance tester merely seeks evidence of *some* (genuine) discrepancy from 0, and eschews a comparative inference such as the ratio of the probability of the data under the points 0 and 2.63 (or 2.8). I don’t say there’s no role for a comparative inference, nor preclude someone arguing it is comparing how well μ = 2.8 “explains” the data compared to μ = 0 (given the assumptions), but the form of inference is so different from significance testing, it’s hard to compare them. She definitely wouldn’t ignore all the points in between 0 and 2.8. A one-sided test is preferable (unless the direction of discrepancy is of no interest). While one or two-sided doesn’t make that much difference for a significance tester, it makes a big difference for the type of Bayesian analysis that is appealed to in the “p-values exaggerate” literature. That’s because a lump prior, often .5 (but here .9!), is placed on the point 0 null. Without the lump, the p-value tends to be close to the posterior probability for *H*_{0}, as Casella and Berger (1987a,b) show–even though p-values and posteriors are actually measuring very different things.

“In fact it is not the case that P-values are too small, but rather that Bayes point null posterior probabilities are much too big!….Our concern should not be to analyze these misspecified problems, but to educate the user so that the hypotheses are properly formulated.” (Casella and Berger 1987b, pp. 334-335)

There is a long and old literature on all this (at least since Edwards, Lindman and Savage 1963–let me know if you’re aware of older sources).

Those who lodge the “p-values exaggerate” critique often say, we’re just showing what would happen even if we made the strongest case for the alternative. No they’re not. They wouldn’t be putting the lump prior on 0 were they concerned not to bias things in favor of the null, and they wouldn’t be looking to compare 0 with so far away an alternative as 2.8 either.

The only way a significance tester can appraise or calibrate a measure such as a BF (and these will differ depending on the alternative picked) is to view it as a statistic and consider the probability of an even larger BF under varying assumptions about the value of μ. This is an error probability associated with the method. Accounts that appraise inferences according to the error probability of the method used I call *error statistical* (which is less equivocal than frequentist or other terms).

For example, rejecting *H*_{0} when z ≥ 1.96 (which is the .05 test, since they make it 2-sided), we said, had .8 power to detect μ = 2.8, but with the .005 test it has only 50% power to do so. If one insists on a fixed .005 cut-off, this is construed as no evidence against the null (or even evidence *for* it–for a Bayesian). The new test has only 30% probability of finding significance were the data generated by μ = 2.3. So the significance tester is rightly troubled by the raised type II error [3], although the members of an imaginary Toxic Co. (having the risks of their technology probed) might be happy as clams.[4]
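These power figures are easy to reproduce (again my own check, standard library only), reusing the two-sided power function for a single N(μ, 1) observation:

```python
from statistics import NormalDist

N = NormalDist()
z_005 = N.inv_cdf(1 - 0.005 / 2)  # ~2.81, cutoff of the two-sided .005 test

def power(mu, z_crit):
    """P(|Z| >= z_crit) when Z ~ N(mu, 1)."""
    return (1 - N.cdf(z_crit - mu)) + N.cdf(-z_crit - mu)

p28 = power(2.8, z_005)  # ~0.50: only 50% power to detect mu = 2.8
p23 = power(2.3, z_005)  # ~0.31: roughly 30% power to detect mu = 2.3
print(round(p28, 2), round(p23, 2))
```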

Suppose we do attain statistical significance at the recommended .005 level, say z = 2.8. The BF advocate assures us we can infer μ = 2.8, which is now 25 times as likely as μ = 0 (if all the various Bayesian assignments hold). The trouble is, the significance tester doesn’t want to claim good evidence for μ = 2.8. The significance tester merely infers an indication of a discrepancy (an isolated low p-value doesn’t suffice, and the assumptions also must be checked). She’d never ignore all the points other than 0 and ± 2.8. Suppose we were testing μ ≤ 2.7 vs. μ > 2.7, and observed z = 2.8. What is the p-value associated with this observed difference? The answer is ~.46. (Her inferences are not in terms of points but of discrepancies from the null, but I’m trying to relate the criticism to significance tests.) To obtain μ ≥ 2.7 using one-sided confidence intervals would require a confidence level of .54: an absurdly low confidence level/high error probability.
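Both numbers follow directly from the Normal model (my own check, standard library only):

```python
from statistics import NormalDist

N = NormalDist()
z_obs = 2.8

# p-value for testing mu <= 2.7 vs. mu > 2.7 with z = 2.8 observed:
p_val = 1 - N.cdf(z_obs - 2.7)  # ~0.46
# Confidence level at which the one-sided lower bound just reaches 2.7:
level = N.cdf(z_obs - 2.7)      # ~0.54
print(round(p_val, 2), round(level, 2))  # 0.46 0.54
```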

The one-sided lower .975 bound with z = 2.8 would only entitle inferring μ > .84 (2.8 – 1.96)–quite a bit smaller than inferring μ = 2.8. If confidence levels are altered as well (and I don’t see why they wouldn’t be), the one-sided lower .995 bound would only be μ > 0. Thus, while the lump prior on *H*_{0} results in a bias in favor of a null–increasing the type II error probability–it’s of interest to note that achieving the recommended p-value licenses an inference much *larger* than what the significance tester would allow.
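A quick check of these lower bounds (my own sketch; for the second I take the bound’s quantile to be the two-sided .005 test’s cutoff, ~2.81, which is what makes it come out at essentially 0):

```python
from statistics import NormalDist

N = NormalDist()
z_obs = 2.8

lower_975 = z_obs - N.inv_cdf(0.975)       # 2.8 - 1.96 = 0.84
lower_005 = z_obs - N.inv_cdf(1 - 0.0025)  # 2.8 - 2.81, essentially 0
print(round(lower_975, 2), round(lower_005, 2))  # 0.84 -0.01
```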

Note, their inference remains comparative in the sense of “*H*_{1} over *H*_{0}” on a given measure; it doesn’t actually say there’s evidence against (or for) either (unless it goes on to compute a posterior, not just odds ratios or BFs), nor does it falsify either hypothesis. This just underscores the fact that the BF comparative inference is importantly different from significance tests, which seek to falsify a null hypothesis, with a view toward learning if there are genuine discrepancies, and if so, their magnitude.

Significance tests do not assign probabilities to these parametric hypotheses, but even if one wanted to, the spiked priors needed for the criticism are questioned by Bayesians and frequentists alike. Casella and Berger (1987a) say that “concentrating mass on the point null hypothesis is biasing the prior in favor of *H*_{0} as much as possible” (p. 111) whether in one or two-sided tests. According to them “The testing of a point null hypothesis is one of the most misused statistical procedures.” (ibid., p. 106)

**III. Why significance testers should reject the “redefine statistical significance” argument:**

**(i)** If you endorse this particular Bayesian way of attaining the BF, fine, but then your argument *begs the central question* against the significance tester (or of the confidence interval estimator, for that matter). The significance tester is free to turn the situation around, as Fisher does, as refuting the assumptions:

Even if one were to imagine that *H*_{0} had an extremely high prior probability, says Fisher—never minding “what such a statement of probability a priori could possibly mean” (Fisher, 1973, p. 42)—the resulting high posterior probability on *H*_{0}, he thinks, would only show that “reluctance to accept a hypothesis strongly contradicted by a test of significance” (ibid., p. 44) “…is not capable of finding expression in any calculation of probability a posteriori” (ibid., p. 43). Indeed, if one were to consider the claim about the a priori probability to be itself a hypothesis, Fisher says, “it would be rejected at once by the observations at a level of significance almost as great [as reached by *H*_{0}]. …Were such a conflict of evidence, as has here been imagined under discussion… in a scientific laboratory, it would, I suggest, be some prior assumption…that would certainly be impugned.” (p. 44)

**(ii)** Suppose, on the other hand, you don’t endorse these priors or the Bayesian computation on which the “redefine significance” argument turns. Since lowering the p-value cut-off doesn’t seem too harmful, you might tend to look the other way as to the argument on which it is based. Isn’t that OK? Not unless you’re prepared to have your students compute these BFs and/or posteriors in just the manner upon which the critique of significance tests rests. Will you say, “oh that was just for criticism, not for actual use”? Unless you’re prepared to defend the statistical analysis, you shouldn’t support it. Lowering the p-value that you require for evidence of a discrepancy, or getting more data (should you wish to do so) doesn’t require it.

Moreover, your student might point out that you still haven’t matched p-values and BFs (or posteriors on *H*_{0}): they still differ, with the p-value being smaller. If you wanted to match the p-value and the posterior, you could do so very easily: use the frequency matching priors (which don’t use the spike). You could still lower the p-value to .005, and obtain a rejection region precisely identical to the Bayesian’s. *Why isn’t that a better solution than one based on a conflicting account of statistical inference?*

Of course, even that is to grant the problem as put before us by the Bayesian argument. If you’re following good error statistical practice you might instead shirk all cut-offs. You’d report attained p-values, and wouldn’t infer a genuine effect until you’ve satisfied Fisher’s requirements: (a) Replicate yourself, show you can bring about results that “rarely fail to give us a statistically significant result” (1947, p. 14) and that you’re getting better at understanding the causal phenomenon involved. (b) Check your assumptions: both the statistical model, the measurements, and the links between statistical measurements and research claims. (c) Make sure you adjust your error probabilities to take account of, or at least report, biasing selection effects (from cherry-picking, trying and trying again, multiple testing, flexible determinations, post-data subgroups)–according to the context. That’s what prespecified reports are to inform you of. The suggestion that these are somehow taken care of by adjusting the pool of hypotheses on which you base a prior will not do. (It’s their plausibility that often makes them so seductive, and anyway, the injury is to how well-tested claims are, not to their prior believability.) The appeal to diagnostic testing computations of “false positive rates” in this paper opens up a whole new urn of worms. Don’t get me started. (see related posts.)

A final word is from a guest post by Senn. Harold Jeffreys, he says, held that if you use the spike (which he introduced), you are to infer the hypothesis that achieves greater than .5 posterior probability.

Within the Bayesian framework, in abandoning smooth priors for lump priors, it is also necessary to change the probability standard. (In fact I speculate that the 1 in 20 standard seemed reasonable partly because of the smooth prior.) … A parsimony principle is used on the prior distribution. You can’t use it again on the posterior distribution. Once that is calculated, you should simply prefer the more probable model. The error that is made is not only to assume that P-values should be what they are not but that when one tries to interpret them in the way that one should not, the previous calibration survives.

It is as if in giving recommendations in dosing children one abandoned a formula based on age and adopted one based on weight but insisted on using the same number of kg one had used for years.

Error probabilities are not posterior probabilities. Certainly, there is much more to statistical analysis than P-values but they should be left alone rather than being deformed in some way to become second class Bayesian posterior probabilities. (Senn)

Please share your views, and alert me to errors. I will likely update this. Stay tuned for asterisks.

12/17 * I’ve already corrected a few typos.

[1] I do not mean the “false positive rate” defined in terms of α and (1 – β)–a problematic animal I put to one side here (Mayo 2003). Richard Morey notes that using their prior odds of 1:10, even the recommended BF of 26 gives us an unimpressive posterior odds ratio of 2.6 (email correspondence).

[2] Note what I call the “fallacy of replication”. It’s said to be too easy to get low p-values, but at the same time it’s too hard to get low p-values in replication. Is it too easy or too hard? That just shows it’s not the p-value at fault but cherry-picking and other biasing selection effects. Replicating a p-value is hard–when you’ve cheated or been sloppy the first time.

[3] They suggest increasing the sample size to get the power where it was with rejection at z = 1.96, and, while this is possible in some cases, increasing the sample size changes what counts as one sample. As n increases the discrepancy indicated by any level of significance decreases.

[4] The severe tester would report attained levels and, in this case, would report the discrepancies indicated and ruled out with reasonable severity (Mayo and Spanos 2011). Keep in mind that statistical testing inferences are in the form of µ > µ’ = µ_{0} + δ, or µ ≤ µ’ = µ_{0} + δ, or the like. They are *not* to point values. As for the imaginary Toxic Co., I’d put the existence of a risk of interest in the null hypothesis of a one-sided test.

**Related Posts**

10/26/17: Going round and round again: a roundtable on reproducibility & lowering p-values

10/18/17: Deconstructing “A World Beyond P-values”

1/19/17: The “P-values overstate the evidence against the null” fallacy

8/28/16: Tragicomedy hour: p-values vs posterior probabilities vs diagnostic error rates

12/20/15: Senn: Double Jeopardy: Judge Jeffreys Upholds the Law, sequel to the pathetic p-value.

2/1/14: Comedy hour at the Bayesian epistemology retreat: highly probable vs highly probed vs B-boosts

11/25/14: How likelihoodists exaggerate evidence from statistical tests

Elements of this post are from Mayo 2018.

**References**

Benjamin, D. J., Berger, J., Johannesson, M., Nosek, B. A., Wagenmakers, E.-J., Berk, R., … Johnson, V. (2017, July 22), “Redefine statistical significance”, *Nature Human Behavior*.

Berger, J. O. and Delampady, M. (1987). “Testing Precise Hypotheses” and “Rejoinder“, *Statistical Science* **2**(3), 317-335.

Berger, J. O. and Sellke, T. (1987). “Testing a point null hypothesis: The irreconcilability of *p *values and evidence,” (with discussion). *J. Amer. Statist. Assoc. ***82: **112–139.

Casella, G. and Berger, R. (1987a). “Reconciling Bayesian and Frequentist Evidence in the One-sided Testing Problem,” (with discussion). *J. Amer. Statist. Assoc.* **82**: 106–111, 123–139.

Casella, G. and Berger, R. (1987b). “Comment on Testing Precise Hypotheses by J. O. Berger and M. Delampady”, *Statistical Science* **2**(3), 344–347.

Edwards, W., Lindman, H. and Savage, L. (1963). “Bayesian Statistical Inference for Psychological Research”, *Psychological Review* 70(3): 193–242.

Fisher, R. A. (1947). *The Design of Experiments *(4^{th} ed.). Edinburgh: Oliver and Boyd. (First published 1935).

Fisher, R. A. (1973). *Statistical Methods and Scientific Inference,* 3rd ed, New York: Hafner Press.

Ghosh, J. Delampady, M., and Samanta, T. (2006). *An Introduction to Bayesian Analysis: Theory and Methods*. New York: Springer.

Mayo, D. G. (2003). “Could Fisher, Jeffreys and Neyman have Agreed on Testing? Commentary on J. Berger’s Fisher Address,” *Statistical Science* 18: 19-24.

Mayo, D. G. (2018). *Statistical Inference as Severe Testing: How to Get Beyond the Statistics Wars*. Cambridge: Cambridge University Press (June 2018).

Mayo, D. G. and Spanos, A. (2011) “Error Statistics” in *Philosophy of Statistics , Handbook of Philosophy of Science* Volume 7 *Philosophy of Statistics*, (General editors: Dov M. Gabbay, Paul Thagard and John Woods; Volume eds. Prasanta S. Bandyopadhyay and Malcolm R. Forster.) Elsevier: 1-46.

“Make sure you adjust your error probabilities to take account of, or at least report, biasing selection effects (from cherry-picking, trying and trying again, multiple testing, flexible determinations, post-data subgroups)–according to the context. ”

In the real world of experimental results, there is a cascade of variability:

1) Variability within the sample;

2) Variability from sampling statistics, assuming all samples come from the same population;

3) Variability that arises when the samples don’t represent the overall population (e.g., U.S. college students vs Kenyan farmers vs the entire world);

4) Variability arising from a post-analysis choice from a range of hypotheses (the “garden of forking paths”);

5) Variations from choice of subgroups in subgroup analysis.

Significance testing only covers 1) and 2). No matter how much clarity we get on 1) and 2), and no matter how we tinker with the p-value cutoff or Bayesian prior, we can’t fix up problems arising from 3) – 5). 3) can’t be resolved without a lot more testing, or a lot better theoretical support, than we usually can have. 4) and 5) depend on the integrity and rigor of the authors of the papers based on an experiment – and remember that authors of high personal integrity can fall under the spell of wishful thinking.

3) through 5) are potentially much larger than 1) and 2).

At the same time, although the garden of forking paths is easy to excoriate, when an experiment produces unexpected results, we probably do want new hypotheses to be thought of. That’s one of the ways that science advances. It’s just that when we generate a hypothesis to explain data, then we have overfit that data, and need new data to test the new idea.

Tom: Right, if you look narrowly at statistical tests as covering just a limited formal component. As I said, one also needs to check assumptions (of the stat models, the measurements, the links between stat and substantive claims), and have self-replicated. Focusing on a p-value adjustment as driven by a Bayesian analysis that conflicts with the error statistical one is detrimental.

Can anyone tell me what the recommended form of inference is? The BF requires an alternative, so what’s the alternative? (mu = x-bar? 2.6? 2.8?) Is the inference just to “there’s some non-zero difference”? (as with a strict 2-sided test). I don’t mean the false positive rate, as they compute it, just the ordinary recommended Bayesian BF analysis. Say we reject the null with X-bar = 2.8, reaching p = .005.

Let me modify my comment from yesterday:

I happened to come across a blogpost that made this remark: Placing a large prior on implausibly large effects results in lower posterior for the alternative. https://www.aarondefazio.com/tangentially/?p=90

I’m not saying that this is what the authors are doing precisely; I’m primarily pointing out a fact that might seem the opposite of what’s thought. (i.e., It might be thought it gives weight to the discrepancy.) The author himself is pointing this out as an ironic fact about Bayes Factors.

Now without claiming this is happening here, I do think there’s a tendency to suppose that assigning high power to an alternative suggests that that’s the kind of discrepancy considered plausible if there is a discrepancy (i.e., under the alternative). It goes with the claim that “low statistical power and alpha = .05 combine to produce high false positive rates”(second page of paper). This doesn’t make sense to an error statistician. It’s using power/alpha as a kind of likelihood ratio for a Bayesian computation. (I have several posts on this.)

Even though the post says I’m not getting into the false positive rate business, let me just ask one thing:

The test has huge power to detect a 5 SE population discrepancy. Why would that mean it had low false positive rates? If you agree with me, then you see what’s wrong with this false positive rate computation. It makes no sense.

The second point in this comment was: Don’t forget the BF test is merely comparative, and having given a lump to the 0 null, it’s asking whether 0 or an alternative with large discrepancy fits the data better. Consider a discrepancy against which the test has huge power, like 5 SE. If the only choices were 0 and 5 SE, then 0 wins out. We aren’t given the error probabilities associated with the comparison.
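To see the point numerically (my own sketch, standard library only): compare a just-significant observation against a far-off 5 SE alternative, using the same spike-and-two-points BF as above.

```python
from statistics import NormalDist

N = NormalDist()

def bf10(x, mu1):
    """BF comparing half prior mass at +/-mu1 against the point null mu = 0."""
    return (0.5 * N.pdf(x - mu1) + 0.5 * N.pdf(x + mu1)) / N.pdf(x)

# A just-significant result (z = 2) compared against a 5 SE alternative:
bf_5se = bf10(2.0, 5.0)
print(round(bf_5se, 2))  # 0.04: the null "wins" despite the significant result
```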

Data Colada had a post awhile back, that The Default Bayesian Test is Prejudiced Against Small Effects: http://datacolada.org/35 “Intuitively (but not literally), that default means the Bayesian test ends up asking: ‘is the effect zero, or is it biggish?’ When the effect is neither, when it’s small, the Bayesian test ends up concluding (erroneously) it’s zero.”

Is not the False Discovery Rate more to the point of what we are interested in? I see zero reason to put a prior on a null, and am puzzled so many seem to think a single hypothesis test is conclusive. Finally, why do the critics of statistical testing not see the bucket of problems with “objective” (read, “meaningless”) priors? Like that solves anything…

John: I’m not sure how you’re defining FDR, as it’s sometimes used as a posterior unlike Benjamini and Hochberg. I agree with the rest of what you say.

I define it as Soric as well as Benjamini/Hochberg. Of course, it is only helpful when testing many hypotheses, but it provides clarity of thinking in those situations, as noted by Efron.

It seems strange to claim that the P-value is biased against the null hypothesis when Fisher designed it to be the exact opposite. Null hypothesis significance testing assumes the null hypothesis is true and then tests the evidence (manifest as the P-value) to see if it is strong enough to refute this proposition. Hence Fisher talks about rejecting the null hypothesis, but never about accepting it – it is implicitly accepted unless refuted. One can construct a likelihood function using the P-value and derive the maximum likelihood estimate (MLE) for the effect size (see Adams and O’Reilly, J Clin Epidemiol Dec 2017). The MLE is zero until the P-value is >0.5 and only approaches the observed effect size when the P-value is <0.2. That is decided bias towards the null hypothesis.

Nick: The lump prior to the point null is said to be biased in favor of it. There are a number of other points in your comment I can’t disambiguate.

Link to article: https://authors.elsevier.com/a/1WALH3BcJPr490


I found this slide presentation by Felix Schoenbrodt to be helpful: https://t.co/3r22X6K9Yu You make an appearance on slide 25 as does Richard Morey.

Isn’t Morey’s point really the Fisherian essence of the whole problem – that a single experiment never warrants shouting “Eureka”? And if the logical positivists got it wrong, didn’t A.B. Hill get it right when he said that following repeated observations of the sun invariably rising a reasonable person decides that tomorrow’s probably another work day and prepares accordingly?

I haven’t harped on this in the current post, but with the exception of a case where there was no interest in the sign at all, the result of a p-value analysis is not a two-sided inference: “there’s an effect in either direction.” The significance tester conducts two one-sided tests and adjusts for selection, doubling the p-value. She makes the inference in the direction observed. Now the Bayesian does not have to adjust for selection–it becomes especially ironic that the same people who argue that optional stopping doesn’t matter, and that we can try and try again until we get a significant result, argue that significance testers exaggerate evidence.


Richard Morey points out that because the Bayesian doesn’t have to take selection effects into account, she “can compute a one-sided Bayes factor from a two-sided one by ‘boosting’ the evidence in favor of the sign that was consistent with the data. The correction factor will be related to the posterior probability of that sign. Under the models described in the RSS team’s paper, for significant p values the correction factor will be about 2. When the RSS team says we have two-sided Bayes factor of about 5 — which they would not call ‘strong’ evidence — we actually know we have useful, one-sided Bayes factor of about 10, which they would call ‘strong’.

… In the worst case scenario, …a ‘strong’ one-sided Bayes factor of 10 corresponds to a p value of about .02; in the best case, about .03.”

So the Bayesian argument actually shows that the usual bound of requiring a p-value of ~.025 is strong evidence for the inference that both the Bayesian and the significance tester would reach (with the exception of cases where there was truly no interest in the sign and a two-sided “there is evidence of some difference in either direction” was all that was inferred).

Morey can tell me whether this is correct. I will modify this accordingly.

However, he was imagining the Bayesian would settle for just inferring (in the case where z = 2.8) that mu > 0, whereas in fact she would infer mu is as large as 2.8 with high probability (.95 or .995). This method would have an error probability of ~.5, as argued in my post.

It will not let me paste the address to Morey’s site.

I have posted the blog here: Redefining statistical significance: the statistical arguments (Part two of a three part series).