July 4, 2014 was the two-year anniversary of the Higgs boson discovery. As the world was celebrating the “5 sigma!” announcement, and we were reading about the statistical aspects of this major accomplishment, I was aghast to be emailed a letter, purportedly instigated by Bayesian Dennis Lindley, sent through Tony O’Hagan to the ISBA. Lindley, according to this letter, wanted to know:
“Are the particle physics community completely wedded to frequentist analysis? If so, has anyone tried to explain what bad science that is?”
Fairly sure it was a joke, I posted it on my “Rejected Posts” blog for a bit until it checked out. (See O’Hagan’s “Digest and Discussion.”)
Then, as details of the statistical analysis trickled down to the media, the P-value police (Wasserman, see (2)) came out in full force to examine if reports by journalists and scientists could in any way or stretch of the imagination be seen to have misinterpreted the sigma levels as posterior probability assignments to the various models and claims. The HEP (High Energy Physics) community had been painstaking in their communication of the results, but the P-bashers insisted on transforming the intended conditional….(I’ll come back to this.)
As for the HEP researchers, a central interest now is to explore any and all leads in the data that would point to physics beyond the Standard Model (BSM). The Higgs is just turning out to be too “perfectly plain vanilla,” and they’ve been unable to reject an SM null for years (3) (more on this later). So on this two-year anniversary, I’ll reblog a few of the Higgs posts, with some updated remarks—beginning with the first one below.
“Is Particle Physics Bad Science?” reblog July 11, 2012
I suppose[ed] this was somewhat of a joke from the ISBA, prompted by Dennis Lindley, but as I [now] accord the actual extent of jokiness to be only ~10%, I’m sharing it on the blog [i]. Lindley (according to O’Hagan) wonders why scientists require so high a level of statistical significance before claiming to have evidence of a Higgs boson. It is asked: “Are the particle physics community completely wedded to frequentist analysis? If so, has anyone tried to explain what bad science that is?”
Bad science? I’d really like to understand what these representatives from the ISBA would recommend, if there is even a shred of seriousness here (or is Lindley just peeved that significance levels are getting so much press in connection with so important a discovery in particle physics?)
Well, read the letter and see what you think.
On Jul 10, 2012, at 9:46 PM, ISBA Webmaster wrote:
A question from Dennis Lindley prompts me to consult this list in search of answers.
We’ve heard a lot about the Higgs boson. The news reports say that the LHC needed convincing evidence before they would announce that a particle had been found that looks like (in the sense of having some of the right characteristics of) the elusive Higgs boson. Specifically, the news referred to a confidence interval with 5-sigma limits.
Now this appears to correspond to a frequentist significance test with an extreme significance level. Five standard deviations, assuming normality, means a p-value of around 0.0000005. A number of questions spring to mind.
1. Why such an extreme evidence requirement? We know from a Bayesian perspective that this only makes sense if (a) the existence of the Higgs boson (or some other particle sharing some of its properties) has extremely small prior probability and/or (b) the consequences of erroneously announcing its discovery are dire in the extreme. Neither seems to be the case, so why 5-sigma?
2. Rather than ad hoc justification of a p-value, it is of course better to do a proper Bayesian analysis. Are the particle physics community completely wedded to frequentist analysis? If so, has anyone tried to explain what bad science that is?
3. We know that given enough data it is nearly always possible for a significance test to reject the null hypothesis at arbitrarily low p-values, simply because the parameter will never be exactly equal to its null value. And apparently the LHC has accumulated a very large quantity of data. So could even this extreme p-value be illusory?
If anyone has any answers to these or related questions, I’d be interested to know and will be sure to pass them on to Dennis.
Professor A O’Hagan
Department of Probability and Statistics
University of Sheffield
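The letter’s arithmetic is easy to check. Under normality, the one-sided tail probability beyond five standard deviations is about 2.9 × 10⁻⁷; the “around 0.0000005” quoted above roughly matches the two-sided figure. A minimal sketch using only the Python standard library (not any code from the HEP analyses themselves):

```python
from math import erfc, sqrt

def one_sided_p(z):
    """One-sided tail probability P(Z > z) for a standard normal Z."""
    return 0.5 * erfc(z / sqrt(2))

p_one_sided = one_sided_p(5.0)   # roughly 2.87e-07
p_two_sided = 2 * p_one_sided    # roughly 5.7e-07, i.e. the letter's "around 0.0000005"
```

Whether one reports the one- or two-sided number is a convention; the HEP convention for “5 sigma” corresponds to the one-sided tail.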
So given that the Higgs boson does not have such an extremely small prior probability, a proper Bayesian analysis would have enabled evidence of the Higgs long before attaining such an “extreme evidence requirement”. Why has no one tried to explain to these scientists how, with just a little Bayesian analysis, they might have been done a year or more ago? I take it the Bayesian would also enjoy the simplicity and freedom of not having to adjust for the “Look Elsewhere Effect” (LEE[ii]).
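The Look Elsewhere Effect mentioned above is why HEP analyses distinguish “local” from “global” significance: scanning many mass bins gives many chances for a mere fluctuation to look like a signal. A toy calculation makes the point (the 100 independent bins are a made-up illustration, not the actual LHC search configuration):

```python
from math import erfc, sqrt

def one_sided_p(z):
    """One-sided tail probability P(Z > z) for a standard normal Z."""
    return 0.5 * erfc(z / sqrt(2))

local_p = one_sided_p(3.0)               # a "3 sigma" excess in one particular bin
n_bins = 100                             # hypothetical number of independent bins searched
global_p = 1 - (1 - local_p) ** n_bins   # chance of seeing such an excess *somewhere*
```

A local 3-sigma excess (p ≈ 0.0013) becomes roughly a 1-in-8 event somewhere in the scan, which is why a trials-factor correction must be applied before quoting a discovery-level significance.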
Let’s see if there’s a serious follow-up.[iii]
[i] bringing it down from my “Msc Kvetching page” where I’d put it last night.
[ii] For a discussion of how the error statistical philosophy avoids the classic criticisms of significance tests, see Mayo & Spanos (2011) ERROR STATISTICS. Other articles may be found on the link to my publication page.
[iii] O’Hagan informed me of several replies to his letter at the following: http://bayesian.org/forums/news/3648
(1) There’s scarce need for my “Rejected Posts” blog now that renegade thoughts can go on “twitter” (@learnfromerror), but I’ll keep it around for later.
(2) The Higgs Boson and the p-value Police: http://normaldeviate.wordpress.com/2012/07/11/the-higgs-boson-and-the-p-value-police/
At this point, grousing about why certain physics subdisciplines use a particular statistical methodology is a waste of one’s breath. Partisan speeches and clever toy models aren’t likely to convince any collaboration to change, particularly those with ingrained traditions and extensive bureaucracies.
Yes, I would prefer to see HPD credible regions for parameter estimates over confidence intervals, but oh well. More applications of Bayesian methods to HEPP problems are undoubtedly a good thing in my mind. But when reading new papers these days, I try to resist the temptation to whine “why didn’t they use my preferred analysis method?” and instead check whether the authors are correctly applying the rules of their own chosen paradigm. Well, at least that’s what I do at first, for I am only human.