Monthly Archives: December 2025

December leisurely cruise “It’s the Methods, Stupid!” Excursion 3 Tour II (3.4-3.6)

2024 Cruise

Welcome to the December leisurely cruise:
Wherever we are sailing, assume that it’s warm, warm, warm (not like today in NYC). This is an overview of our first set of readings for December from my Statistical Inference as Severe Testing: How to Get Beyond the Statistics Wars (CUP 2018) [SIST]: Excursion 3 Tour II. This leisurely cruise, as participants know, is intended to take a whole month to cover one week of readings from my 2020 LSE Seminars, except for December and January, which double up.

What do you think of “3.6 Hocus-Pocus: P-values Are Not Error Probabilities, Are Not Even Frequentist”? This section takes up Jim Berger’s famous attempted unification of Jeffreys, Neyman, and Fisher in 2003. The unification considers testing two simple hypotheses using a random sample from a Normal distribution, computing the P-value under each hypothesis, rejecting whichever hypothesis gets the smaller P-value, and then computing its posterior probability, with each hypothesis given a prior of .5. This becomes what he calls the “Bayesian error probability”, upon which he defines “the frequentist principle”. On Berger’s reading of an important paper* by Neyman (1977), Neyman criticized P-values for violating the frequentist principle (SIST p. 186). *The paper is “Frequentist Probability and Frequentist Statistics”. Remember that links to readings outside SIST are at the Captain’s biblio at the top left of the blog. Share your thoughts in the comments.
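Berger’s setup, as described above, is concrete enough to sketch in a few lines. The following is my own toy illustration, not Berger’s code: the function name, the sample values, and the convention mu1 > mu0 are assumptions made purely for illustration.

```python
import math
from statistics import NormalDist

def berger_unification_demo(xbar, n, mu0=0.0, mu1=1.0, sigma=1.0):
    """Toy sketch of the two-simple-hypotheses setup described above.

    Given the mean of n draws from a Normal with known sigma, compute
    the P-value under each simple hypothesis, reject the one with the
    smaller P-value, and report the posterior probability of the
    rejected hypothesis under equal (.5) priors -- the quantity Berger
    dubs a "Bayesian error probability".
    """
    se = sigma / math.sqrt(n)
    # P-value under H0: how extreme is xbar in the direction of H1
    # (assuming mu1 > mu0)
    p0 = 1 - NormalDist(mu0, se).cdf(xbar)
    # P-value under H1: how extreme is xbar in the direction of H0
    p1 = NormalDist(mu1, se).cdf(xbar)
    rejected = "H0" if p0 < p1 else "H1"
    # Likelihoods of xbar under each hypothesis
    lik0 = NormalDist(mu0, se).pdf(xbar)
    lik1 = NormalDist(mu1, se).pdf(xbar)
    # Posterior of H0 with priors .5 each (priors cancel)
    post0 = lik0 / (lik0 + lik1)
    # "Bayesian error probability": posterior of the rejected hypothesis
    bayes_error = post0 if rejected == "H0" else 1 - post0
    return rejected, p0, p1, bayes_error
```

For instance, with a sample mean of 0.9 from n = 10 draws, H0 gets the smaller P-value and is rejected, and its posterior (the “Bayesian error probability”) comes out well below .5.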

Some snapshots from Excursion 3 Tour II.

Excursion 3 Tour II: It’s The Methods, Stupid

Tour II disentangles a jungle of conceptual issues at the heart of today’s statistics wars. The first stop (3.4) unearths the basis for a number of howlers and chestnuts thought to be licensed by Fisherian or N-P tests.** In each exhibit, we study the basis for the joke. Together, they show: the need for an adequate test statistic, the difference between implicationary (i-assumptions) and actual assumptions, and the fact that tail areas serve to raise, and not lower, the bar for rejecting a null hypothesis. (Additional howlers occur in Excursion 3 Tour III.)

recommended: medium to heavy shovel 

Stop (3.5) pulls back the curtain on the view that Fisher and N-P tests form an incompatible hybrid. Incompatibilist tribes retain caricatures of F & N-P tests, and rob each of notions they need (e.g., power and alternatives for F, P-values & post-data error probabilities for N-P). Those who allege that Fisherian P-values are not error probabilities often mean simply that Fisher wanted an evidential, not a performance, interpretation. This is a philosophical, not a mathematical, claim. N-P and Fisher tended to use P-values in both ways. It’s time to get beyond incompatibilism. “Even if we couldn’t point to quotes and applications that break out of the strict ‘evidential versus behavioral’ split, we should be the ones to interpret the methods for inference, and supply the statistical philosophy that directs their right use.” (p. 181)

strongly recommended: light to medium shovel, thick-skinned jacket

In (3.6) we slip into the jungle. Critics argue that P-values are for evidence, unlike error probabilities, but then aver that P-values aren’t good measures of evidence either, since they disagree with probabilist measures: likelihood ratios, Bayes factors, or posteriors. A famous peace treaty between Fisher, Jeffreys & Neyman promises a unification. A bit of magic ensues! The meaning of error probability changes into a type of Bayesian posterior probability. It’s then possible to say ordinary frequentist error probabilities (e.g., type I & II error probabilities) aren’t error probabilities. We get beyond this marshy swamp by introducing subscripts 1 and 2. Whatever you think of the two concepts, they are very different. This recognition suffices to get you out of quicksand.

required: easily removed shoes, stiff walking stick (review Souvenir M on day of departure)

**Several of these may be found by searching for “Saturday night comedy” on this blog. In SIST, however, I trace out the basis for the jokes.

Selected key terms and ideas

Howlers and chestnuts of statistical tests
armchair science
Jeffreys tail area criticism
Limb sawing logic
Two machines with different precisions
Weak conditionality principle (WCP)
Conditioning (see WCP)
Likelihood principle
Long run performance vs probabilism
Alphas and p’s
Fisher as behaviorist
Hypothetical long-runs
Freudian metaphor for significance tests
Pearson, on cases where there’s no repetition
Armour-piercing naval shell
Error probability 1 and error probability 2
Incompatibilist philosophy (F and N-P must remain separate)
Test statistic requirements (p. 159)

Please share your questions, other key terms to add, and any typos you find, in the comments. Interested in joining us? Write to jemille6@vt.edu. I plan another group zoom soon.

Categories: 2025 leisurely cruise

Modest replication probabilities of p-values–desirable, not regrettable: a note from Stephen Senn

You will often hear—especially in discussions about the “replication crisis”—that statistical significance tests exaggerate evidence. Significance testing, we hear, inflates effect sizes, inflates power, inflates the probability of a real effect, or inflates the probability of replication, and thereby misleads scientists.

If you look closely, you’ll find the charges are based on concepts and philosophical frameworks foreign to both Fisherian and Neyman–Pearson hypothesis testing. Nearly all have been discussed on this blog or in SIST (Mayo 2018), but new variations have cropped up. The emphasis that some are now placing on how biased selection effects invalidate error probabilities is welcome, but the recommendations for reinterpreting quantities such as P-values and power introduce radical distortions of error statistical inferences. Before diving into the modern incarnations of these charges, it’s worth recalling Stephen Senn’s response to Stephen Goodman’s attempt to convert P-values into replication probabilities nearly 20 years ago (“A Comment on Replication, P-values and Evidence,” Statistics in Medicine). I first blogged it in 2012, here. Below I am pasting some excerpts from Senn’s letter (readers interested in the topic should look at all of it), because Senn’s clarity cuts straight through many of today’s misunderstandings.
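For readers who want the arithmetic behind the “replication probability” at issue: under a normal approximation, and assuming the true effect equals the observed one, the chance that an exact replication (same design, same sample size) again reaches significance can be sketched as follows. This is a hypothetical helper of my own for illustration, not code from either Goodman’s or Senn’s papers.

```python
from statistics import NormalDist

def replication_probability(p_obs, alpha=0.05):
    """Probability that an exact replication again reaches p < alpha,
    assuming the true effect equals the observed one (the assumption
    behind the replication-probability calculation), two-sided normal
    approximation throughout."""
    nd = NormalDist()
    z_obs = nd.inv_cdf(1 - p_obs / 2)   # z corresponding to the observed P-value
    z_crit = nd.inv_cdf(1 - alpha / 2)  # critical z the replication must exceed
    # The replication's z-statistic is (approximately) N(z_obs, 1)
    return 1 - nd.cdf(z_crit - z_obs)
```

A result landing exactly at p = 0.05 yields a replication probability of 0.5: the replication’s test statistic is centered right at the cutoff, so it clears it about half the time. On Senn’s view this modest figure is just what one should expect from a result that barely reached significance, not a defect of P-values.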

Categories: 13 years ago, p-values exaggerate, replication research, S. Senn | 8 Comments