An argument that assumes the very thing that was to have been argued for is guilty of *begging the question*; to sign on to an argument whose conclusion you favor even though you cannot defend its premises is to argue *unsoundly*, and in bad faith. When a whirlpool of “reforms” subliminally alters the nature and goals of a method, falling into these sins can be quite inadvertent. Start with a simple point on defining the power of a statistical test.

**I. Redefine Power?**

Given that power is one of the most confused concepts from Neyman-Pearson (N-P) frequentist testing, it’s troubling that in “Redefine Statistical Significance”, power gets redefined too. “Power,” we’re told, is a Bayes Factor BF “obtained by defining *H*_{1} as putting ½ probability on μ = ± m for the value of m that gives 75% power for the test of size α = 0.05. This *H*_{1} represents an effect size typical of that which is implicitly assumed by researchers during experimental design.” (material under Figure 1).
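To fix ideas on what that effect size is numerically: assuming the standard setup of a two-sided z-test with known variance (an assumption for illustration; the quoted passage does not spell out the test), where m denotes the standardized effect √n·μ/σ, the value of m giving 75% power at size α = 0.05 can be solved for directly. A minimal sketch:

```python
from statistics import NormalDist

z = NormalDist()  # standard normal N(0, 1)

alpha = 0.05          # two-sided test size
target_power = 0.75   # the power level fixed when choosing H1

z_crit = z.inv_cdf(1 - alpha / 2)   # two-sided critical value, about 1.96

def power(m):
    """Power of the two-sided size-alpha z-test when the true
    standardized mean is m (known variance folded into m)."""
    return z.cdf(-z_crit - m) + 1 - z.cdf(z_crit - m)

# Solve power(m) = 0.75 by bisection on [0, 10].
lo, hi = 0.0, 10.0
for _ in range(100):
    mid = (lo + hi) / 2
    if power(mid) < target_power:
        lo = mid
    else:
        hi = mid
m = (lo + hi) / 2
print(round(m, 2))  # roughly z_crit + inv_cdf(0.75) ≈ 1.96 + 0.67
```

So the alternative the authors put ½ probability on sits about 2.6 standard errors from the null, i.e. a bit beyond the cutoff itself; nothing in the N-P definition of power is Bayesian, which is the point at issue.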