Higgs Discovery two years on (1: “Is particle physics bad science?”)


July 4, 2014 was the two-year anniversary of the Higgs boson discovery. As the world was celebrating the “5 sigma!” announcement, and we were reading about the statistical aspects of this major accomplishment, I was aghast to be emailed a letter, purportedly instigated by Bayesian Dennis Lindley through Tony O’Hagan (to the ISBA). Lindley, according to this letter, wanted to know:

“Are the particle physics community completely wedded to frequentist analysis?  If so, has anyone tried to explain what bad science that is?”

Fairly sure it was a joke, I posted it on my “Rejected Posts” blog for a bit until it checked out (1). (See O’Hagan’s “Digest and Discussion.”)

Then, as details of the statistical analysis trickled down to the media, the P-value police (Wasserman, see (2)) came out in full force to examine if reports by journalists and scientists could in any way or stretch of the imagination be seen to have misinterpreted the sigma levels as posterior probability assignments to the various models and claims. The HEP (High Energy Physics) community had been painstaking in their communication of the results, but the P-bashers insisted on transforming the intended conditional….(I’ll come back to this.)

As for the HEP researchers, a central interest now is to explore any and all leads in the data that would point to physics beyond the Standard Model (BSM). The Higgs is turning out to be just too “perfectly plain vanilla,” and they’ve been unable to reject an SM null for years (3) (more on this later). So on this two-year anniversary, I’ll reblog a few of the Higgs posts, with some updated remarks, beginning with the first one below.

“Is Particle Physics Bad Science?” reblog July 11, 2012 

I suppose[ed] this was somewhat of a joke from the ISBA, prompted by Dennis Lindley, but as I [now] accord the actual extent of jokiness to be only ~10%, I’m sharing it on the blog [i].  Lindley (according to O’Hagan) wonders why scientists require so high a level of statistical significance before claiming to have evidence of a Higgs boson.  It is asked: “Are the particle physics community completely wedded to frequentist analysis?  If so, has anyone tried to explain what bad science that is?”

Bad science?   I’d really like to understand what these representatives from the ISBA would recommend, if there is even a shred of seriousness here (or is Lindley just peeved that significance levels are getting so much press in connection with so important a discovery in particle physics?)

Well, read the letter and see what you think.

On Jul 10, 2012, at 9:46 PM, ISBA Webmaster wrote:

Dear Bayesians,

A question from Dennis Lindley prompts me to consult this list in search of answers.

We’ve heard a lot about the Higgs boson.  The news reports say that the LHC needed convincing evidence before they would announce that a particle had been found that looks like (in the sense of having some of the right characteristics of) the elusive Higgs boson.  Specifically, the news referred to a confidence interval with 5-sigma limits.

Now this appears to correspond to a frequentist significance test with an extreme significance level.  Five standard deviations, assuming normality, means a p-value of around 0.0000005.  A number of questions spring to mind.

1.  Why such an extreme evidence requirement?  We know from a Bayesian  perspective that this only makes sense if (a) the existence of the Higgs  boson (or some other particle sharing some of its properties) has extremely small prior probability and/or (b) the consequences of erroneously announcing its discovery are dire in the extreme.  Neither seems to be the case, so why  5-sigma?

2.  Rather than ad hoc justification of a p-value, it is of course better to do a proper Bayesian analysis.  Are the particle physics community completely wedded to frequentist analysis?  If so, has anyone tried to explain what bad science that is?


3.  We know that given enough data it is nearly always possible for a  significance test to reject the null hypothesis at arbitrarily low p-values,  simply because the parameter will never be exactly equal to its null value.   And apparently the LHC has accumulated a very large quantity of data.  So could even this extreme p-value be illusory?

If anyone has any answers to these or related questions, I’d be interested to  know and will be sure to pass them on to Dennis.

Regards,

Tony

—-
Professor A O’Hagan      
Email: a.ohagan@sheffield.ac.uk
Department of Probability and Statistics
University of Sheffield       
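As an aside on the numbers in the letter: the quoted p-value can be reproduced from the normal tail area. Here is a minimal sketch using only the Python standard library (the function name `p_value` is mine, for illustration, not from any HEP analysis code):

```python
from math import erfc, sqrt

def p_value(sigma, two_sided=False):
    """Tail probability of a standard normal beyond `sigma` deviations."""
    p = 0.5 * erfc(sigma / sqrt(2))   # one-sided upper-tail area
    return 2 * p if two_sided else p

# The HEP discovery convention is a one-sided 5-sigma threshold:
print(f"one-sided: {p_value(5):.3e}")                  # ~2.867e-07
# The letter's "around 0.0000005" matches the two-sided figure:
print(f"two-sided: {p_value(5, two_sided=True):.3e}")  # ~5.733e-07
```

So the letter’s figure corresponds to the two-sided tail; the one-sided p-value HEP reports for a 5-sigma excess is about 2.9 × 10⁻⁷.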

So given that the Higgs boson does not have such an extremely small prior probability, a proper Bayesian analysis would have enabled evidence of the Higgs long before attaining such an “extreme evidence requirement.” Why has no one tried to explain to these scientists how, with just a little Bayesian analysis, they might have been done a year ago, or even years ago? I take it the Bayesian would also enjoy the simplicity and freedom of not having to adjust for the “Look Elsewhere Effect” (LEE [ii]).
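The Look Elsewhere Effect that the Bayesian would supposedly get to skip is, in essence, a multiple-comparisons correction: scanning many mass windows makes a large local fluctuation somewhere far more probable under background alone. A hypothetical sketch (the window count of 100 is illustrative only, not the actual LHC trials factor):

```python
from math import erfc, sqrt

def local_p(sigma):
    """One-sided local p-value for a `sigma`-level excess."""
    return 0.5 * erfc(sigma / sqrt(2))

def global_p(sigma, n_windows):
    """Chance that at least one of n_windows independent mass windows
    fluctuates beyond `sigma` under the background-only hypothesis."""
    return 1 - (1 - local_p(sigma)) ** n_windows

# A "3 sigma" local excess looks far less impressive globally:
print(f"local:  {local_p(3):.2e}")        # ~1.35e-03
print(f"global: {global_p(3, 100):.3f}")  # ~0.126 for 100 windows
```

This is why HEP distinguishes “local” from “global” significance in its reports, rather than a needless frequentist complication.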

Let’s see if there’s a serious follow-up.[iii]

[i] bringing it down from my “Msc Kvetching page” where I’d put it last night.

[ii] For a discussion of how the error statistical philosophy avoids the classic criticisms of significance tests, see Mayo & Spanos (2011) ERROR STATISTICS. Other articles may be found on the link to my publication page.

[iii] O’Hagan informed me of several replies to his letter at the following: http://bayesian.org/forums/news/3648

*****************************************************

(1) There’s scarce need for my “Rejected Posts” blog now that renegade thoughts can go on “twitter” (@learnfromerror), but I’ll keep it around for later.

(2) The Higgs Boson and the p-value Police:  http://normaldeviate.wordpress.com/2012/07/11/the-higgs-boson-and-the-p-value-police/

(3) The logic in this case is especially interesting. Each failure to reject a null of this type informs us about the variant of BSM ruled out. (I’ll check with Robert Cousins that I’ve put this correctly. Update: He says that I have.) Here’s a link to Cousins’ recent paper on the Higgs and foundations of statistics: http://arxiv.org/abs/1310.3791.

Categories: Bayesian/frequentist, fallacy of non-significance, Higgs, Lindley, Statistics


4 thoughts on “Higgs Discovery two years on (1: “Is particle physics bad science?”)”

  1. West

    At this point, grousing about why certain physics subdisciplines use a particular statistical methodology is a waste of one’s breath. Partisan speeches and clever toy models aren’t likely to convince any collaboration to change, particularly those with ingrained traditions and extensive bureaucracies.

    Yes, I would prefer to see HPD credible regions for parameter estimates over confidence intervals, but oh well. More applications of Bayesian methods to HEPP problems are undoubtedly a good thing in my mind. But when reading new papers these days, I try to resist the temptation to whine “why didn’t they use my preferred analysis method?” and check whether the authors are correctly applying the rules of their own chosen paradigm. Well, at least that’s what I do at first, for I am only human.

    • West: I have no reason to think that “more applications of Bayesian methods to HEPP problems is undoubtedly a good thing” in the least. Nor did O’Hagan-Lindley give reasons other than opportunistic ones.

  2. West

    Mayo: I do not advocate applying Bayesian methods to HEPP problems as a blind partisan but as a practitioner who has studied similar problems in astronomy. This isn’t to suggest the large library of existing analysis tools should be jettisoned, but that adding new ones to the toolbox can be really helpful.

    O’Hagan admits he was being deliberately provocative, as the nature of his questions makes painfully clear. My personal response upon reading the message was a Liz Lemon-esque eye-roll. Negative arguments against an existing protocol don’t work, especially when it’s been successfully used in that discipline.

    But one successful analysis paradigm shouldn’t preclude the use of others, particularly when there is no empirical evidence that alternatives won’t work. The best arguments for trying a Bayesian analysis on their own terms and not relative to anything else come from successful applications to similar problems.

    • West: The Lindley-O’Hagan letter wasn’t advocating the addition of new tools, but rather that the entire enterprise involve eliciting priors for every possible effect and parameter before getting started. His regret was all the work that statisticians could have been getting. (That differs from how some conventional Bayesians have pushed their methods on HEPP–mostly harmless modeling tools, I suppose.)
      Are you, or is anyone else, aware of sessions on Higgs statistics at major statistics meetings? I saw none at the last JSM.
