Statistics and the Higgs Discovery: 9 yr Memory Lane

I’m reblogging two of my Higgs posts on the 9th anniversary of the 2012 discovery. (The first was in this post.) The following was originally “Higgs Analysis and Statistical Flukes: part 2” (from March 2013).[1]

Some people say to me: “severe testing is fine for ‘sexy science’ like high energy physics (HEP)”–as if their statistical inferences are radically different. But I maintain that this is the mode by which data are used in “uncertain” reasoning across the entire landscape of science and day-to-day learning, at least when we’re trying to find things out.[2] Even with high-level theories, the particular problems of learning from data are tackled piecemeal, in local inferences that afford error control. Granted, this statistical philosophy differs importantly from those that view the task as assigning comparative (or absolute) degrees of support/belief/plausibility to propositions, models, or theories.

The Higgs discussion finds its way into Tour III in Excursion 3 of my Statistical Inference as Severe Testing: How to Get Beyond the Statistics Wars (2018, CUP). You can read it (in proof form) here, pp. 202-217, in a section with the provocative title:

3.8 The Probability Our Results Are Statistical Fluctuations: Higgs’ Discovery

Continue reading

Categories: Higgs, highly probable vs highly probed, P-values | Leave a comment

Statisticians Rise Up To Defend (error statistical) Hypothesis Testing

What is the message conveyed when the board of a professional association X appoints a Task Force intended to dispel the supposition that a position advanced by the Executive Director of association X reflects the views of association X, on a topic on which members of X disagree? What it says to me is that there is a serious breakdown of communication amongst the leadership and membership of that association. So while I’m extremely glad that the ASA appointed the Task Force on Statistical Significance and Replicability in 2019, I’m very sorry that the main reason it was needed was to address concerns that an editorial put forward by the ASA Executive Director (and two others) “might be mistakenly interpreted as official ASA policy”. The 2021 Statement of the Task Force (Benjamini et al. 2021) explains:

In 2019 the President of the American Statistical Association (ASA) established a task force to address concerns that a 2019 editorial in The American Statistician (an ASA journal) might be mistakenly interpreted as official ASA policy. (The 2019 editorial recommended eliminating the use of “p < 0.05” and “statistically significant” in statistical analysis.) This document is the statement of the task force…

Continue reading

Categories: ASA Task Force on Significance and Replicability, Schachtman, significance tests | 10 Comments

June 24: “Have Covid-19 lockdowns led to an increase in domestic violence? Drawing inferences from police administrative data” (Katrin Hohl)

The tenth meeting of our Phil Stat Forum*:

The Statistics Wars
and Their Casualties

24 June 2021

TIME: 15:00-16:45 (London); 10:00-11:45 (New York, EDT)

For information about the Phil Stat Wars forum and how to join, click on this link.

“Have Covid-19 lockdowns led to an increase in domestic violence? Drawing inferences from police administrative data” 

Katrin Hohl Continue reading

Categories: Error Statistics | Leave a comment

At long last! The ASA President’s Task Force Statement on Statistical Significance and Replicability

The ASA President’s Task Force Statement on Statistical Significance and Replicability has finally been published. It found a home in The Annals of Applied Statistics, after everyone else they looked to–including the ASA itself–refused to publish it. For background see this post. I’ll comment on it in a later post. There is also an Editorial: Statistical Significance, P-Values, and Replicability by Karen Kafadar. Continue reading

Categories: ASA Task Force on Significance and Replicability | 11 Comments

The F.D.A.’s controversial ruling on an Alzheimer’s drug (letter from a reader) (ii)

I was watching Biogen’s stock (BIIB) climb over 100 points yesterday because its Alzheimer’s drug, aducanumab [brand name: Aduhelm], received surprising FDA approval.  I hadn’t been following the drug at all (it’s enough to try and track some Covid treatments/vaccines). I knew only that the FDA panel had unanimously recommended not to approve it last year, and the general sentiment was that it was heading for FDA rejection yesterday. After I received an email from Geoff Stuart[i] asking what I thought, I found out a bit more. He wrote: Continue reading

Categories: PhilStat/Med, preregistration | 10 Comments

Bayesian philosophers vs Bayesian statisticians: Remarks on Jon Williamson

While I would agree that there are differences between Bayesian statisticians and Bayesian philosophers, those differences don’t line up with the ones drawn by Jon Williamson in his presentation to our Phil Stat Wars Forum (May 20 slides). I hope Bayesians–statisticians (or, more generally, practitioners) and philosophers–will weigh in on this.

Continue reading
Categories: Phil Stat Forum, stat wars and their casualties | 11 Comments

Mayo’s Casualties of O-Bayesianism and Williamson’s Response

After Jon Williamson’s talk, Objective Bayesianism from a Philosophical Perspective, at the Phil Stat Forum on May 20, I raised some general “casualties” encountered by objective, non-subjective, or default Bayesian accounts, not necessarily Williamson’s. I am pasting those remarks below, followed by some additional remarks and the video of his responses to my main kvetches. Continue reading

Categories: frequentist/Bayesian, objective Bayesians, Phil Stat Forum | 4 Comments

May 20: “Objective Bayesianism from a Philosophical Perspective” (Jon Williamson)

The ninth meeting of our Phil Stat Forum*:

The Statistics Wars
and Their Casualties

20 May 2021

TIME: 15:00-16:45 (London); 10:00-11:45 (New York, EDT)

For information about the Phil Stat Wars forum and how to join, click on this link.

“Objective Bayesianism from a philosophical perspective” 

Jon Williamson Continue reading

Categories: Error Statistics | Leave a comment

Tom Sterkenburg Reviews Mayo’s “Statistical Inference as Severe Testing: How to Get Beyond the Statistics Wars” (2018, CUP)

T. Sterkenburg

Tom Sterkenburg, PhD
Postdoctoral Fellow
Munich Center for Mathematical Philosophy
LMU Munich
Munich, Germany

Deborah G. Mayo: Statistical Inference as Severe Testing: How to Get Beyond the Statistics Wars

The foundations of statistics is not a land of peace and quiet. “Tribal warfare” is perhaps putting it too strong, but it is the case that for decades now various camps and subcamps have been exchanging heated arguments about the right statistical methodology. That these skirmishes are not just an academic exercise is clear from the widespread use of statistical methods, and contemporary challenges that cry for more secure foundations: the rise of big data, the replication crisis.

Continue reading

Categories: SIST, Statistical Inference as Severe Testing–Review, Tom Sterkenburg | 9 Comments

CUNY zoom talk on Wednesday: Evidence as Passing a Severe Test

If interested, write to me for the zoom link (error@vt.edu).

Categories: Announcement | Leave a comment

April 22 “How an information metric could bring truce to the statistics wars” (Daniele Fanelli)

The eighth meeting of our Phil Stat Forum*:

The Statistics Wars
and Their Casualties

22 April 2021

TIME: 15:00-16:45 (London); 10:00-11:45 (New York, EDT)

For information about the Phil Stat Wars forum and how to join, click on this link.

“How an information metric could bring truce to the statistics wars”

Daniele Fanelli Continue reading

Categories: Phil Stat Forum, replication crisis, stat wars and their casualties | 1 Comment

A. Spanos: Jerzy Neyman and his Enduring Legacy (guest post)

I am reblogging a guest post that Aris Spanos wrote for this blog on Neyman’s birthday some years ago.   

A. Spanos

A Statistical Model as a Chance Mechanism
Aris Spanos 

Jerzy Neyman (April 16, 1894 – August 5, 1981) was a Polish-American statistician[i] who spent most of his professional career at the University of California, Berkeley. Neyman is best known in statistics for his pioneering contributions in framing the Neyman-Pearson (N-P) optimal theory of hypothesis testing and his theory of Confidence Intervals. (This article was first posted here.) Continue reading

Categories: Neyman, Spanos | Leave a comment

Happy Birthday Neyman: What was Neyman opposing when he opposed the ‘Inferential’ Probabilists?

Today is Jerzy Neyman’s birthday (April 16, 1894 – August 5, 1981). I’m posting a link to a quirky paper of his that explains one of the most misunderstood of his positions–what he was opposed to in opposing the “inferential theory”. The paper is Neyman, J. (1962), ‘Two Breakthroughs in the Theory of Statistical Decision Making’.[i] It’s chock-full of ideas and arguments. “In the present paper,” he tells us, “the term ‘inferential theory’…will be used to describe the attempts to solve the Bayes’ problem with a reference to confidence, beliefs, etc., through some supplementation …either a substitute a priori distribution [exemplified by the so called principle of insufficient reason] or a new measure of uncertainty” such as Fisher’s fiducial probability. The paper comes up on p. 391 of Excursion 5 Tour III of Statistical Inference as Severe Testing: How to Get Beyond the Statistics Wars (2018, CUP). Here’s a link to the proofs of that entire tour. If you hear Neyman rejecting “inferential accounts,” you have to understand it in this very specific way: he’s rejecting “new measures of confidence or diffidence”. Here he alludes to them as “easy ways out”. He is not rejecting statistical inference in favor of behavioral performance, as is typically thought. Neyman always distinguished his error statistical performance conception from Bayesian and fiducial probabilisms [ii]. The surprising twist here is semantical, and the culprit is none other than…Allan Birnbaum. Yet Birnbaum gets short shrift, and no mention is made of our favorite “breakthrough” (or did I miss it?). You can find quite a lot on this blog searching Birnbaum. Continue reading

Categories: Bayesian/frequentist, Error Statistics, Neyman | 3 Comments

Intellectual conflicts of interest: Reviewers

Where do journal editors look to find someone to referee your manuscript (in the typical “double-blind” review system in academic journals)? One obvious place to look is the reference list in your paper. After all, if you’ve cited them, they must know about the topic of your paper, putting them in a good position to write a useful review. The problem is that if your paper is on a topic of ardent disagreement, and you argue in favor of one side of the debates, then your reference list is likely to include those with actual or perceived conflicts of interest. If someone has a strong standpoint on a controversial issue, and a strong interest in persuading others to accept their side, that creates an intellectual conflict of interest whenever that person has the power to uphold that view. Since your referee is in a position of significant power to do just that, it follows that they have a conflict of interest (COI). A lot of attention is paid to authors’ conflicts of interest, but little to the intellectual or ideological conflicts of interest of reviewers. At most, the concern is with the reviewer having special reasons to favor the author, usually thought to be indicated by having been a previous co-author. We’ve been talking about journal editors’ conflicts of interest of late (e.g., with Mark Burgman’s presentation at the last Phil Stat Forum), and this brings to mind another one. Continue reading

Categories: conflicts of interest, journal referees | 12 Comments

ASA to Release the Recommendations of its Task Force on Statistical Significance and Replicability

The American Statistical Association has announced that it has decided to reverse course and share the recommendations developed by the ASA Task Force on Statistical Significance and Replicability in one of its official channels. The ASA Board created this group [1] in November 2019 “with a charge to develop thoughtful principles and practices that the ASA can endorse and share with scientists and journal editors.” (AMSTATNEWS 1 February 2020). Some members of the ASA Board felt that its earlier decision not to make these recommendations public, but instead to leave the group to publish its recommendations on its own, might give the appearance of a conflict of interest between the obligation of the ASA to represent the wide variety of methodologies used by its members in widely diverse fields, and the advocacy by some members who believe practitioners should stop using the term “statistical significance” and end the practice of using p-value thresholds in interpreting data [the Wasserstein et al. (2019) editorial]. I think that deciding to publicly share the new Task Force recommendations is very welcome, especially given that the Task Force was appointed to avoid just such an apparent conflict of interest. Past ASA President Karen Kafadar noted: Continue reading

Categories: conflicts of interest | 1 Comment

The Stat Wars and Intellectual conflicts of interest: Journal Editors

 

Like most wars, the Statistics Wars continue to have casualties. Some of the reforms thought to improve reliability and replication may actually create obstacles to methods known to improve reliability and replication. At each meeting of the Phil Stat Forum, “The Statistics Wars and Their Casualties,” I take 5-10 minutes to draw out a proper subset of the casualties associated with the topic of the day’s presenter. (The associated workshop that I have been organizing with Roman Frigg at the London School of Economics (CPNSS) now has a date for a hoped-for in-person meeting in London: 24-25 September 2021.) Of course, we’re interested not just in casualties but in positive contributions, though what counts as a casualty and what counts as a contribution is itself a focus of philosophy-of-statistics battles.

Continue reading
Categories: Error Statistics | Leave a comment

Reminder: March 25 “How Should Applied Science Journal Editors Deal With Statistical Controversies?” (Mark Burgman)

The seventh meeting of our Phil Stat Forum*:

The Statistics Wars
and Their Casualties

25 March, 2021

TIME: 15:00-16:45 (London); 11:00-12:45 (New York, NOTE TIME CHANGE TO MATCH UK TIME**)

For information about the Phil Stat Wars forum and how to join, click on this link.

How should applied science journal editors deal with statistical controversies?

Mark Burgman Continue reading

Categories: ASA Guide to P-values, confidence intervals and tests, P-values, significance tests | 1 Comment

Pandemic Nostalgia: The Corona Princess: Learning from a petri dish cruise (reblog 1yr)

Last week, giving a long-postponed talk for the NY/NY Metro Area Philosophers of Science Group (MAPS), I mentioned how my book Statistical Inference as Severe Testing: How to Get Beyond the Statistics Wars (2018, CUP) invites the reader to see themselves on a special-interest cruise as we revisit old and new controversies in the philosophy of statistics–noting that I had no idea in writing the book that cruise ships would themselves become controversial in just a few years. The first thing I wrote during the early pandemic days last March was this post on the Diamond Princess. The statistics gleaned from the ship remain important resources, and in many ways they haven’t been far off. I reblog it here. Continue reading

Categories: covid-19, memory lane | Leave a comment

March 25 “How Should Applied Science Journal Editors Deal With Statistical Controversies?” (Mark Burgman)

The seventh meeting of our Phil Stat Forum*:

The Statistics Wars
and Their Casualties

25 March, 2021

TIME: 15:00-16:45 (London); 11:00-12:45 (New York, NOTE TIME CHANGE)

For information about the Phil Stat Wars forum and how to join, click on this link.

How should applied science journal editors deal with statistical controversies?

Mark Burgman Continue reading

Categories: ASA Guide to P-values, confidence intervals and tests, P-values, significance tests | 1 Comment
