Author Archives: Mayo

December leisurely cruise “It’s the Methods, Stupid!” Excursion 3 Tour II (3.4-3.6)

2025 Cruise

Welcome to the December leisurely cruise:
Wherever we are sailing, assume that it’s warm, warm, warm (not like today in NYC). This is an overview of our first set of readings for December from my Statistical Inference as Severe Testing: How to get beyond the statistics wars (CUP 2018): [SIST]–Excursion 3 Tour II. This leisurely cruise, participants know, is intended to take a whole month to cover one week of readings from my 2020 LSE Seminars, except for December and January which double up. 

What do you think of “3.6 Hocus-Pocus: P-values Are Not Error Probabilities, Are Not Even Frequentist”? This section refers to Jim Berger’s famous attempted unification of Jeffreys, Neyman and Fisher in 2003. The unification considers testing two simple hypotheses using a random sample from a Normal distribution, computing their two P-values, rejecting whichever hypothesis gets the smaller P-value, and then computing its posterior probability, assuming each hypothesis gets a prior of 0.5. This becomes what he calls the “Bayesian error probability”, upon which he defines “the frequentist principle”. On Berger’s reading of an important paper* by Neyman (1977), Neyman criticized P-values for violating the frequentist principle (SIST p. 186). *The paper is “Frequentist Probability and Frequentist Statistics”. Remember that links to readings outside SIST are at the Captain’s Biblio on the top left of the blog. Share your thoughts in the comments.
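To fix ideas, here is a minimal sketch of the procedure as just described: two simple Normal hypotheses, both P-values computed, the one with the smaller P-value rejected, and its posterior computed with priors of 0.5 each. The particular values of mu1, mu2, sigma and n are my illustrative assumptions, not Berger’s:

import numpy as np
from scipy import stats

# Illustrative assumptions (not from Berger 2003): two simple hypotheses
# about a Normal mean, known sigma, sample size n.
mu1, mu2 = 0.0, 1.0
sigma, n = 1.0, 25
se = sigma / np.sqrt(n)

rng = np.random.default_rng(1)
xbar = rng.normal(mu2, se)  # one observed sample mean (true mean is mu2 here)

# Each hypothesis's P-value is computed in the direction of its rival.
p1 = 1 - stats.norm.cdf(xbar, mu1, se)  # large xbar discredits mu1
p2 = stats.norm.cdf(xbar, mu2, se)      # small xbar discredits mu2

# Reject whichever hypothesis has the smaller P-value ...
rejected_is_mu1 = p1 < p2

# ... then compute its posterior probability with priors of 0.5 each.
L1 = stats.norm.pdf(xbar, mu1, se)
L2 = stats.norm.pdf(xbar, mu2, se)
post_rejected = (L1 if rejected_is_mu1 else L2) / (L1 + L2)

print(f"xbar = {xbar:.3f}, p1 = {p1:.4f}, p2 = {p2:.4f}")
print(f"posterior of the rejected hypothesis = {post_rejected:.4f}")

The posterior of the rejected hypothesis is the quantity Berger treats as the “Bayesian error probability”; this sketch is only meant to make the setup concrete for the discussion.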

Some snapshots from Excursion 3 tour II.

Excursion 3 Tour II: It’s the Methods, Stupid

Tour II disentangles a jungle of conceptual issues at the heart of today’s statistics wars. The first stop (3.4) unearths the basis for a number of howlers and chestnuts thought to be licensed by Fisherian or N-P tests.** In each exhibit, we study the basis for the joke. Together, they show: the need for an adequate test statistic, the difference between implicationary (i-) assumptions and actual assumptions, and the fact that tail areas serve to raise, and not lower, the bar for rejecting a null hypothesis. (Additional howlers occur in Excursion 3 Tour III.)

recommended: medium to heavy shovel 

Stop (3.5) pulls back the curtain on the view that Fisher and N-P tests form an incompatible hybrid. Incompatibilist tribes retain caricatures of F & N-P tests, and rob each of notions they need (e.g., power and alternatives for F, P-values & post-data error probabilities for N-P). Those who allege that Fisherian P-values are not error probabilities often mean simply that Fisher wanted an evidential, not a performance, interpretation. This is a philosophical, not a mathematical, claim. N-P and Fisher tended to use P-values in both ways. It’s time to get beyond incompatibilism. “Even if we couldn’t point to quotes and applications that break out of the strict ‘evidential versus behavioral’ split, we should be the ones to interpret the methods for inference, and supply the statistical philosophy that directs their right use.” (p. 181)

strongly recommended: light to medium shovel, thick-skinned jacket

In (3.6) we slip into the jungle. Critics argue that P-values are for evidence, unlike error probabilities, but then aver that P-values aren’t good measures of evidence either, since they disagree with probabilist measures: likelihood ratios, Bayes factors, or posteriors. A famous peace treaty between Fisher, Jeffreys & Neyman promises a unification. A bit of magic ensues! The meaning of error probability changes into a type of Bayesian posterior probability. It’s then possible to say ordinary frequentist error probabilities (e.g., type I & II error probabilities) aren’t error probabilities. We get beyond this marshy swamp by introducing subscripts 1 and 2. Whatever you think of the two concepts, they are very different. This recognition suffices to get you out of quicksand.
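For readers who want the contrast on one line, here is my shorthand gloss (not a quotation from the text), with d(X) the test statistic, c_alpha the cutoff, and, for the second notion, priors of 0.5 on each of two simple hypotheses assumed:

% error probability_1: an ordinary frequentist error probability, computed
% solely from the sampling distribution of d(X) under H_0:
\mathrm{error\ probability}_1:\quad \Pr\big(d(X) \ge c_\alpha \,;\, H_0\big) = \alpha

% error probability_2: a posterior probability of H_0, unobtainable without
% prior probabilities (here 0.5 each), via Bayes' theorem:
\mathrm{error\ probability}_2:\quad \Pr\big(H_0 \mid d(X) \ge c_\alpha\big)
  = \frac{0.5\,\Pr\big(d(X) \ge c_\alpha \mid H_0\big)}{\Pr\big(d(X) \ge c_\alpha\big)}

The first requires no priors; the second cannot be computed without them. Whatever their merits, they are different animals, which is all the subscripts are meant to mark.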

required: easily removed shoes, stiff walking stick (review Souvenir M on day of departure)

**Several of these may be found by searching for “Saturday night comedy” on this blog. In SIST, however, I trace out the basis for the jokes.

Selected key terms and ideas

Howlers and chestnuts of statistical tests
armchair science
Jeffreys tail area criticism
Limb sawing logic
Two machines with different precisions
Weak conditionality principle (WCP)
Conditioning (see WCP)
Likelihood principle
Long run performance vs probabilism
Alphas and p’s
Fisher as behaviorist
Hypothetical long-runs
Freudian metaphor for significance tests
Pearson, on cases where there’s no repetition
Armour-piercing naval shell
Error probability 1 and error probability 2
Incompatibilist philosophy (F and N-P must remain separate)
Test statistic requirements (p. 159)

Please share your questions, other key terms to add, and any typos you find, in the comments. Interested in joining us? Write to jemille6@vt.edu. I plan another group zoom soon.

Categories: 2025 leisurely cruise | Leave a comment

Modest replication probabilities of p-values–desirable, not regrettable: a note from Stephen Senn


You will often hear—especially in discussions about the “replication crisis”—that statistical significance tests exaggerate evidence. Significance testing, we hear, inflates effect sizes, inflates power, inflates the probability of a real effect, or inflates the probability of replication, and thereby misleads scientists.

If you look closely, you’ll find the charges are based on concepts and philosophical frameworks foreign to both Fisherian and Neyman–Pearson hypothesis testing. Nearly all have been discussed on this blog or in SIST (Mayo 2018), but new variations have cropped up. The emphasis that some are now placing on how biased selection effects invalidate error probabilities is welcome, but I say that the recommendations for reinterpreting quantities such as p-values and power introduce radical distortions of error statistical inferences. Before diving into the modern incarnations of the charges, it’s worth recalling Stephen Senn’s response to Stephen Goodman’s attempt to convert p-values into replication probabilities nearly 20 years ago (“A Comment on Replication, P-values and Evidence,” Statistics in Medicine). I first blogged it in 2012, here. Below I paste some excerpts from Senn’s letter (readers interested in the topic should look at all of it), because Senn’s clarity cuts straight through many of today’s misunderstandings.
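As a quick way to see the quantity at issue, here is a minimal sketch (my own, not Goodman’s or Senn’s calculation) of the simplest “replication probability”: treat the observed standardized effect as the true one and ask how often an identical study would again reach two-sided significance at the 0.05 level:

from scipy import stats

alpha = 0.05
z_crit = stats.norm.ppf(1 - alpha / 2)  # two-sided cutoff, ~1.96

for p in (0.05, 0.01, 0.001):
    z_obs = stats.norm.ppf(1 - p / 2)  # z corresponding to the observed two-sided p
    # Chance a same-sized replicate again reaches significance, if the true
    # standardized effect equals z_obs (ignoring the tiny chance of
    # significance in the opposite direction):
    rep = 1 - stats.norm.cdf(z_crit - z_obs)
    print(f"observed p = {p:<6} -> replication probability ~ {rep:.2f}")

A result just at p = 0.05 gives a replication probability of about 0.5, and that modest value, as the title of this post signals, is just what Senn argues we should expect, and even want.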


Continue reading

Categories: 13 years ago, p-values exaggerate, replication research, S. Senn | 8 Comments

First Look at N-P Methods as Severe Tests: Water plant accident [Exhibit (i) from Excursion 3]

November Cruise

The example I use here to illustrate formal severity comes in for criticism in a paper to which I reply in a 2025 BJPS article, linked to here. Use the comments for queries.

Exhibit (i) N-P Methods as Severe Tests: First Look (Water Plant Accident) 

There’s been an accident at a water plant where our ship is docked, and the cooling system had to be repaired. It is meant to ensure that the mean temperature of discharged water stays below the temperature that threatens the ecosystem, perhaps not much beyond 150 degrees Fahrenheit. There were 100 water measurements taken at randomly selected times and the sample mean X̄ computed, each measurement with a known standard deviation σ = 10. When the cooling system is effective, each measurement is like observing X ~ N(150, 10²). Because of this variability, we expect different 100-fold water samples to lead to different values of X̄, but we can deduce its distribution. If each X ~ N(μ = 150, 10²), then X̄ is also Normal with μ = 150, but the standard deviation of X̄ is only σ/√n = 10/√100 = 1. So X̄ ~ N(μ = 150, 1).
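If you want to check that deduction for yourself, here is a minimal simulation sketch (mine, not from SIST): draw many 100-measurement samples from N(150, 10²) and look at the spread of the resulting sample means.

import numpy as np

rng = np.random.default_rng(0)
mu, sigma, n = 150.0, 10.0, 100

# 100,000 simulated water samples, each of n = 100 measurements.
xbars = rng.normal(mu, sigma, size=(100_000, n)).mean(axis=1)

print(f"mean of the sample means: {xbars.mean():.3f}  (deduced: 150)")
print(f"sd of the sample means:   {xbars.std():.3f}  (deduced: 10/sqrt(100) = 1)")

Continue reading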

Categories: 2025 leisurely cruise, severe tests, severity function, water plant accident | Leave a comment

Neyman-Pearson Tests: An Episode in Anglo-Polish Collaboration (3.2)

Neyman & Pearson

November Cruise: 3.2

This third of November’s stops in the leisurely cruise of SIST aligns well with my recent BJPS paper Severe Testing: Error Statistics versus Bayes Factor Tests. In tomorrow’s zoom, 11 am New York time, we’ll have an overview of the topics in SIST so far, as well as a discussion of this paper. (If you don’t have a link, and want one, write to me at error@vt.edu).

3.2 N-P Tests: An Episode in Anglo-Polish Collaboration*

We proceed by setting up a specific hypothesis to test, H0 in Neyman’s and my terminology, the null hypothesis in R. A. Fisher’s . . . in choosing the test, we take into account alternatives to H0 which we believe possible or at any rate consider it most important to be on the look out for . . . Three steps in constructing the test may be defined: Continue reading

Categories: 2024 Leisurely Cruise, E.S. Pearson, Neyman, statistical tests | Leave a comment

Where Are Fisher, Neyman, Pearson in 1919? Opening of Excursion 3, snippets from 3.1

November Cruise

This second excerpt for November is really just the preface to 3.1. Remember, our abbreviated cruise this fall is based on my LSE Seminars in 2020, and since there were only 5 seminars, I had to cut material. So those seminars skipped 3.1 on the eclipse tests of GTR. But I want to share snippets from 3.1 with current readers, along with reflections in the comments.

Excursion 3 Statistical Tests and Scientific Inference

Tour I Ingenious and Severe Tests

[T]he impressive thing about [the 1919 tests of Einstein’s theory of gravity] is the risk involved in a prediction of this kind. If observation shows that the predicted effect is definitely absent, then the theory is simply refuted. The theory is incompatible with certain possible results of observation – in fact with results which everybody before Einstein would have expected. This is quite different from the situation I have previously described, [where] . . . it was practically impossible to describe any human behavior that might not be claimed to be a verification of these [psychological] theories. (Popper 1962, p. 36)

Continue reading

Categories: 2025 leisurely cruise, SIST, Statistical Inference as Severe Testing | 2 Comments

November: The leisurely tour of SIST continues

2025 Cruise

We continue our leisurely tour of Statistical Inference as Severe Testing [SIST] (Mayo 2018, CUP) with Excursion 3. This is based on my 5 seminars at the London School of Economics in 2020; I include slides and video for those who are interested. (Use the comments for questions.) Continue reading

Categories: 2025 leisurely cruise, significance tests, Statistical Inference as Severe Testing | 1 Comment

Severity and Adversarial Collaborations (i)


In the 2025 November/December issue of American Scientist, a group of authors (Ceci, Clark, Jussim and Williams 2025) argue in “Teams of rivals” that “adversarial collaborations offer a rigorous way to resolve opposing scientific findings, inform key sociopolitical issues, and help repair trust in science”. With adversarial collaborations, a term coined by Daniel Kahneman (2003), teams of divergent scholars, interested in uncovering what is the case (rather than endlessly making their case), design appropriately stringent tests to understand–and perhaps even resolve–their disagreements. I am pleased to see that in describing such tests the authors allude to my notion of severe testing (Mayo 2018)*:

Severe testing is the related idea that the scientific community ought to accept a claim only after it surmounts rigorous tests designed to find its flaws, rather than tests optimally designed for confirmation. The strong motivation each side’s members will feel to severely test the other side’s predictions should inspire greater confidence in the collaboration’s eventual conclusions. (Ceci et al., 2025)

1. Why open science isn’t enough Continue reading

Categories: severity and adversarial collaborations | 5 Comments

Excursion 1 Tour I (3rd stop): The Current State of Play in Statistical Foundations: A View From a Hot-Air Balloon (1.3)

Third Stop

Readers: With this third stop we’ve covered Tour I of Excursion 1. My slides from the first LSE meeting in 2020, which dealt with elements of Excursion 1, can be found at the end of this post. There’s also a video giving an overall intro to SIST, Excursion 1. It’s noteworthy to consider just how much things seem to have changed in just the past few years. Or have they? What would the view from the hot-air balloon look like now? Share your thoughts in the comments.

ZOOM: I propose a zoom meeting for Sunday, November 16 at 11 am or Friday, November 21 at 11 am, New York time. (An equal # prefer Fri & Sun.) The link will be available to those who register/registered with Dr. Miller*.

The Current State of Play in Statistical Foundations: A View From a Hot-Air Balloon (1.3)


How can a discipline, central to science and to critical thinking, have two methodologies, two logics, two approaches that frequently give substantively different answers to the same problems? … Is complacency in the face of contradiction acceptable for a central discipline of science? (Donald Fraser 2011, p. 329)

We [statisticians] are not blameless … we have not made a concerted professional effort to provide the scientific world with a unified testing methodology. (J. Berger 2003, p. 4)

Continue reading

Categories: 2025 leisurely cruise, Statistical Inference as Severe Testing | Leave a comment

The ASA Sir David R. Cox Foundations of Statistics Award is now annual

15 July 1924 – 18 January 2022

The Sir David R. Cox Foundations of Statistics Award will now be given annually by the American Statistical Association (ASA), thanks to generous contributions by “Friends” of David Cox, solicited on this blog!*

Nominations for the 2026 Sir David R. Cox Foundations of Statistics Award are due on November 1, 2025, and require the following:

  • Nomination letter
  • Candidate’s CV
  • Two letters of support, not to exceed two pages each

Continue reading

Categories: Sir David Cox, Sir David Cox Foundations of Statistics Award | Leave a comment

Excursion 1 Tour I (2nd Stop): Probabilism, Performance, and Probativeness (1.2)


Readers: Last year at this time I gave a Neyman seminar at Berkeley and posted on a panel discussion we had. There were lots of great questions and follow-ups. Here’s a link.

“I shall be concerned with the foundations of the subject. But in case it should be thought that this means I am not here strongly concerned with practical applications, let me say right away that confusion about the foundations of the subject is responsible, in my opinion, for much of the misuse of the statistics that one meets in fields of application such as medicine, psychology, sociology, economics, and so forth”. (George Barnard 1985, p. 2)

While statistical science (as with other sciences) generally goes about its business without attending to its own foundations, implicit in every statistical methodology are core ideas that direct its principles, methods, and interpretations. I will call this its statistical philosophy. To tell what’s true about statistical inference, understanding the associated philosophy (or philosophies) is essential. Discussions of statistical foundations tend to focus on how to interpret probability, and much less on the overarching question of how probability ought to be used in inference. Assumptions about the latter lurk implicitly behind debates, but rarely get the limelight. If we put the spotlight on them, we see that there are two main philosophies about the roles of probability in statistical inference: We may dub them performance (in the long run) and probabilism. Continue reading

Categories: Error Statistics | Leave a comment

2025 (1) The leisurely cruise begins: Excerpt from Excursion 1 Tour I of Statistical Inference as Severe Testing (SIST)

Ship StatInfAsSt

Excerpt from Excursion 1 Tour I: Beyond Probabilism and Performance: Severity Requirement (1.1)

NOTE: The following is an excerpt from my book: Statistical Inference as Severe Testing: How to Get Beyond the Statistics Wars (CUP, 2018). For any new reflections or corrections, I will use the comments. The initial announcement is here (including how to join).

I’m talking about a specific, extra type of integrity that is [beyond] not lying, but bending over backwards to show how you’re maybe wrong, that you ought to have when acting as a scientist. (Feynman 1974/1985, p. 387)

It is easy to lie with statistics. Or so the cliché goes. It is also very difficult to uncover these lies without statistical methods – at least of the right kind. Self-correcting statistical methods are needed, and, with minimal technical fanfare, that’s what I aim to illuminate. Since Darrell Huff wrote How to Lie with Statistics in 1954, ways of lying with statistics are so well worn as to have emerged in reverberating slogans:

  • Association is not causation.
  • Statistical significance is not substantive significance.
  • No evidence of risk is not evidence of no risk.
  • If you torture the data enough, they will confess.

Continue reading

Categories: Statistical Inference as Severe Testing | Leave a comment

2025 Leisurely cruise through Statistical Inference as Severe Testing: First Announcement

Ship StatInfAsSt

We’re embarking on a leisurely cruise through the highlights of Statistical Inference as Severe Testing [SIST]: How to Get Beyond the Statistics Wars (CUP 2018) this fall (Oct-Jan), following the 5 seminars I led for a 2020 London School of Economics (LSE) Graduate Research Seminar. It had to be run online due to Covid (as were the workshops that followed). Unlike last fall, this time I will include some zoom meetings on the material, as well as new papers and topics of interest to attendees. In this relaxed (self-paced) journey, excursions that had been covered in a week will be spread out over a month [i] and I’ll be posting abbreviated excerpts on this blog. Look for the posts marked with the picture of ship StatInfAsSt. [ii] Continue reading

Categories: 2024 Leisurely Cruise, Announcement | Leave a comment

My BJPS paper: Severe Testing: Error Statistics versus Bayes Factor Tests


In my new paper, “Severe Testing: Error Statistics versus Bayes Factor Tests”, now out online at The British Journal for the Philosophy of Science, I “propose that commonly used Bayes factor tests be supplemented with a post-data severity concept in the frequentist error statistical sense”. But how? I invite your thoughts on this and any aspect of the paper.* (You can read it here.)

I’m pasting the abstract and the introduction below. Continue reading

Categories: Bayesian/frequentist, Likelihood Principle, multiple testing | 4 Comments

Are We Listening? Part II of “Sennsible significance” Commentary on Senn’s Guest Post


This is Part II of my commentary on Stephen Senn’s guest post, Be Careful What You Wish For. In this follow-up, I take up two topics:

(1) A terminological point raised in the comments to Part I, and
(2) A broader concern about how a popular reform movement reinforces precisely the mistaken construal Senn warns against.

But first, a question—are we listening? Because what underlies what Senn is saying is subtle, and yet what’s at stake is quite important for today’s statistical controversies. It’s not just a matter of which of four common construals is most apt for the population effect we wish to have high power to detect.[1] As I hear Senn, he’s also flagging a misunderstanding that allows some statistical reformers to (wrongly) dictate what statistical significance testers “wish” for in the first place. Continue reading

Categories: clinical relevance, power, reforming the reformers, S. Senn | 5 Comments

“Sennsible significance” Commentary on Senn’s Guest Post (Part I)


Have the points in Stephen Senn’s guest post fully come across? Responding to comments from diverse directions has given Senn a lot of work, for which I’m very grateful. But I say we should not leave off the topic just yet. I don’t think the core of Senn’s argument has gotten the attention it deserves. So, we’re not done yet.[0]

I will write my commentary in two parts, so please return for Part II. In Part I, I’ll attempt to give an overarching version of Senn’s warning (“Be careful what you wish for”) and his main recommendation. He will tell me if he disagrees. All quotes are from his post. In Senn’s opening paragraph:

…Even if a hypothesis is rejected and the effect is assumed genuine, it does not mean it is important…many a distinguished commentator on clinical trials has confused the difference you would be happy to find with the difference you would not like to miss. The former is smaller than the latter. For reasons I have explained in this blog [reblogged here], you should use the latter for determining the sample size as part of a conventional power calculation.
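Before going on, here is a minimal sketch of the conventional power calculation Senn is referring to, with the sample size driven by delta, the difference you would not like to miss. The Normal two-arm setup and all the numbers are my illustrative assumptions, not Senn’s:

import math
from scipy import stats

alpha, power = 0.05, 0.80
sigma = 10.0   # assumed sd of the outcome
delta = 5.0    # the difference we would not like to miss (assumed)

z_a = stats.norm.ppf(1 - alpha / 2)
z_b = stats.norm.ppf(power)

# Standard two-parallel-group formula with known sigma:
n_per_arm = 2 * (sigma * (z_a + z_b) / delta) ** 2
print(f"n per arm ~ {math.ceil(n_per_arm)}")  # ~63 with these numbers

Since n grows as 1/delta², plugging in the smaller difference you would merely be happy to find demands a much larger trial; that is one reason the two differences must not be confused.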

Continue reading

Categories: clinical relevance, power, S. Senn | 6 Comments

Stephen Senn (guest post): “Relevant significance? Be careful what you wish for”

 


Stephen Senn

Consultant Statistician
Edinburgh

Relevant significance?

Be careful what you wish for

Despised and Rejected

Scarcely a good word can be had for statistical significance these days. We are admonished (as if we did not know) that just because a null hypothesis has been ‘rejected’ by some statistical test, it does not mean it is not true and thus it does not follow that significance implies a genuine effect of treatment. Continue reading

Categories: clinical relevance, power, S. Senn | 47 Comments

(Guest Post) Stephen Senn: “Delta Force: To what extent is clinical relevance relevant?” (reblog)

Stephen Senn


Errorstatistics.com has been extremely fortunate to have contributions by leading medical statistician Stephen Senn over many years. Recently, he provided me with a new post that I’m about to put up, but as it builds on an earlier post, I’ll reblog that one first. Following his new post, I’ll share some reflections on the issue.

Stephen Senn
Consultant Statistician
Edinburgh, Scotland

Delta Force
To what extent is clinical relevance relevant?

Inspiration
This note has been inspired by a Twitter exchange with respected scientist and famous blogger David Colquhoun. He queried whether a treatment that had 2/3 of an effect that would be described as clinically relevant could be useful. I was surprised at the question, since I would regard it as being pretty obvious that it could but, on reflection, I realise that things that may seem obvious to some who have worked in drug development may not be obvious to others, and, if they are not obvious to others, are either in need of a defence or wrong. I don’t think I am wrong and this note is to explain my thinking on the subject. Continue reading

Categories: power, Statistics, Stephen Senn | 2 Comments

A recent “brown bag” I gave in Philo at Va Tech: “What is the Philosophy of Statistics? (and how I was drawn to it)”


I gave a talk last week as part of the VT Department of Philosophy’s “brown bag” series. Here’s the blurb:

What is the Philosophy of Statistics? (and how I was drawn to it)

I give an introductory discussion of two key philosophical controversies in statistics in relation to today’s “replication crisis” in science: the role of probability, and the nature of evidence, in error-prone inference. I begin with a simple principle: We don’t have evidence for a claim C if little, if anything, has been done that would have found C false (or specifically flawed), even if it is. Along the way, I sprinkle in some autobiographical reflections.

My slides are at the end of this post: Continue reading

Categories: 2 way street: Stat & Phil of Sci, phil/history of stat, significance tests, stopping rule | Leave a comment

Error statistics doesn’t blame for possible future crimes of QRPs (ii)

A seminal controversy in statistical inference is whether error probabilities associated with an inference method are evidentially relevant once the data are in hand. Frequentist error statisticians say yes; Bayesians say no. A “no” answer goes hand in hand with holding the Likelihood Principle (LP), which follows from inference by Bayes’ theorem. A “yes” answer violates the LP (also called the strong LP). The reason error probabilities drop out, according to the LP, is that all the evidence from the data is contained in the likelihood ratios (at least for inference within a statistical model). For the error statistician, likelihood ratios are merely measures of comparative fit, and omit crucial information about their reliability. A dramatic illustration of this disagreement involves optional stopping, and it’s the one to which Roderick Little turns in the chapter “Do you like the likelihood principle?” in his new book that I cite in my last post.
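For readers who want to see the illustration in action, here is a minimal simulation sketch (mine, not Little’s): keep testing H0: μ = 0 after each new observation from N(0, 1) and stop as soon as the nominal 5% cutoff is crossed. The cap of 1,000 observations and the 1.96 cutoff are my assumptions:

import numpy as np

rng = np.random.default_rng(42)
n_max, trials = 1000, 2000
rejections = 0

for _ in range(trials):
    x = rng.normal(0.0, 1.0, n_max)        # H0 is true throughout
    ns = np.arange(1, n_max + 1)
    z = np.cumsum(x) / np.sqrt(ns)         # z-statistic after each new observation
    if np.any(np.abs(z) >= 1.96):          # stop-and-reject rule (try-and-try-again)
        rejections += 1

print(f"Type I error with optional stopping: {rejections / trials:.2f}")  # well above 0.05

Even though H0 is true, sampling to a foregone conclusion pushes the actual type I error probability far above the nominal 5%; that error probability is exactly what the LP deems irrelevant once the data are in. Continue reading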

Categories: Likelihood Principle, Rod Little, stopping rule | 5 Comments

Roderick Little’s new book: Seminal Ideas and Controversies in Statistics

Around a year ago, Professor Rod Little asked me if I’d mind being on the cover of a book he was finishing, along with Fisher, Neyman and some others (can you identify the others?). Mind? The book is Seminal Ideas and Controversies in Statistics (Routledge, 2025), and it has been out about a month. Little is the Richard D. Remington Distinguished University Professor of Biostatistics at the University of Michigan. Here’s the Preface:

Preface:

Statistics has developed as a field through seminal papers and fascinating controversies. This book concerns a wide-ranging set of 15 statistical topics, grouped into three sets:

Part I, Chapters 1–6. Philosophical approaches to statistical inference,

Part II, Chapters 7–12. Statistical methodology, and

Part III, Chapters 13–15. Topics on statistical design, focusing on the role of randomization. Continue reading

Categories: Rod Little | 2 Comments
