Monthly Archives: March 2013

possible progress on the comedy hour circuit?

Image of a businesswoman rolling a giant stone

It’s not April Fool’s Day yet, so I take it that Corey Yanofsky, one of the top 6 commentators on this blog, is serious in today’s exchange, despite claiming to be a Jaynesian (whatever that is). I dare not scratch too deep or look too close…along the lines of not looking a gift horse in the mouth, or however that goes. So here’s a not-too-selective report from our exchange in the comments on my previous blogpost:

Mayo: You wrote: “I think I wrote something to the effect that your philosophy was the only one I have encountered that could possibly put frequentist procedures on a sound footing; I stand by that.” I’m curious as to why I deserve this honor ….

Corey: Mayo: It was always obvious no competent frequentist statistician would use a procedure criticized by the howlers; the problem was that I had never seen a compelling explanation why (beyond “that’s obviously stupid”). So you deserve the honor for putting forth a single principle from which error statistical procedures flow that refutes all of the howlers at once.

Mayo: Corey: Wow, that’s a big concession even coupled with your remaining doubts…. Maybe I should highlight this portion of our exchange for our patient readers, looking for any sign of progress…

Corey: Mayo: Feel free to highlight it. I will point out that this “concession” shouldn’t be news to you: in an email I sent you on September 11, 2012, I wrote, ‘I now appreciate how the severity-based approach fully addresses all the typical criticisms offered during “Bayesian comedy hour”. Now, when I encounter these canards in Bayesian writings, I feel chagrin that they are being propagated; I certainly shall not be repeating them myself.’

Mayo: Ok, so you get an Honorable Mention, especially as I’m always pushing this boulder, or maybe it’s a stone egg. It will be a miracle if any to-be-published Bayesian texts or new editions excise some of the howlers!

But I still don’t understand the hesitancy in coming over to the error statistical side….
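(For readers new to the blog: the “single principle” Corey refers to is the severity requirement. Here is a minimal sketch, mine and purely illustrative, of a severity calculation for the familiar one-sided normal test T+, along the lines of this blog’s earlier Severity Calculator post; the numbers are assumptions, not anything from the exchange.)

```python
# Hypothetical sketch of a severity calculation for the one-sided
# normal test T+ (H0: mu <= 0 vs H1: mu > 0, sigma known), in the
# spirit of this blog's "Severity Calculator" post. All numbers below
# are made up for illustration.
from math import sqrt
from scipy.stats import norm

def severity(xbar, mu1, sigma=1.0, n=25):
    """SEV(mu > mu1): the probability of a result LESS in accord with
    'mu > mu1' than the one observed, computed under mu = mu1."""
    se = sigma / sqrt(n)
    return norm.cdf((xbar - mu1) / se)

# With xbar = 0.4 (so se = 0.2): the claim mu > 0.2 passes with
# severity ~0.84, while the stronger claim mu > 0.6 has severity
# only ~0.16 and is unwarranted.
print(severity(0.4, 0.2), severity(0.4, 0.6))
```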

Categories: Uncategorized | 42 Comments

Higgs analysis and statistical flukes (part 2)

Everyone was excited when the Higgs boson results were reported on July 4, 2012, indicating evidence for a Higgs-like particle based on a “5 sigma observed effect”. The observed effect refers to the number of excess events of a given type that are “observed” in comparison to the number (or proportion) that would be expected from background alone, and not due to a Higgs particle. This continues my earlier post. This, too, is a rough outsider’s angle on one small aspect of the statistical inferences involved. (Doubtless there will be corrections.) But that, apart from being fascinated by it, is precisely why I have chosen to discuss it: we should be able to employ a general philosophy of inference to get an understanding of what is true about the controversial concepts we purport to illuminate, e.g., significance levels.

Following an official report from ATLAS, researchers define a “global signal strength” parameter “such that μ = 0 corresponds to the background only hypothesis and μ = 1 corresponds to the SM Higgs boson signal in addition to the background” (where SM is the Standard Model). The statistical test may be framed as a one-sided test, where the test statistic (which is actually a ratio) records differences in the positive direction, in standard deviation (sigma) units. Reports such as: Continue reading
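(As a back-of-the-envelope aside, purely my own and not the ATLAS computation, whose test statistic is a likelihood ratio: if the statistic is idealized as a standard normal under the background-only hypothesis μ = 0, the familiar sigma thresholds correspond to these one-sided tail areas.)

```python
# Hypothetical sketch: the one-sided p-value corresponding to a k-sigma
# excess, idealizing the test statistic as standard normal under the
# background-only hypothesis (mu = 0). The actual ATLAS statistic is a
# likelihood ratio, so this only illustrates the tail areas reported.
from scipy.stats import norm

for k in (3, 5):
    p = norm.sf(k)  # upper-tail area beyond k sigma
    print(f"{k} sigma -> one-sided p ~ {p:.2e}")
# 3 sigma -> one-sided p ~ 1.35e-03 (the "evidence" threshold)
# 5 sigma -> one-sided p ~ 2.87e-07 (the "discovery" threshold)
```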

Categories: P-values, statistical tests, Statistics | 33 Comments

Is NASA suspending public education and outreach?

In connection with my last post on public communication of science, a reader sent me this.[i]

Source: NASA Internal Memo: Guidance for Education and Public Outreach Activities Under Sequestration

Posted Friday, March 22, 2013

Subject: Guidance for Education and Public Outreach Activities Under Sequestration

As you know, we have taken the first steps in addressing the mandatory spending cuts called for in the Budget Control Act of 2011. The law mandates a series of indiscriminate and significant across-the-board spending reductions totaling $1.2 trillion over 10 years.

As a result, we are forced to implement a number of new cost-saving measures, policies, and reviews in order to minimize impacts to the mission-critical activities of the Agency. We have already provided new guidance regarding conferences, travel, and training that reflects the new fiscal reality in which we find ourselves. Some have asked for more specific guidance as it relates to public outreach and engagement activities. That guidance is provided below. Continue reading

Categories: science communication | 2 Comments

Telling the public why the Higgs particle matters

There’s been some controversy in the past two days regarding public comments made about the importance of the Higgs. Professor Matt Strassler, on his blog, “Of Particular Significance,” expresses a bit of outrage:

“Why, Professor Kaku? Why?”

Posted on March 19, 2013 | 70 Comments

Professor Michio Kaku, of City College (part of the City University of New York), is well-known for his work on string theory in the 1960s and 1970s, and best known today for his outreach efforts through his books and his appearances on radio and television.  His most recent appearance was a couple of days ago, in an interview on CBS television, which made its way into this CBS news article about the importance of the Higgs particle.

Unfortunately, what that CBS news article says about “why the Higgs particle matters” is completely wrong.  Why?  Because it’s based on what Professor Kaku said about the Higgs particle, and what he said is wrong.  Worse, he presumably knew that it was wrong.  (If he didn’t, that’s also pretty bad.) It seems that Professor Kaku feels it necessary, in order to engage the imagination of the public, to make spectacular distortions of the physics behind the Higgs field and the Higgs particle, even to the point of suggesting the Higgs particle triggered the Big Bang. Continue reading

Categories: science communication | Leave a comment

Update on Higgs data analysis: statistical flukes (part 1)

I am always impressed at how researchers flout the popular philosophical conception of scientists as being happy as clams when their theories are ‘borne out’ by data, while terribly dismayed to find any anomalies that might demand “revolutionary science” (as Kuhn famously called it). Scientists, says Kuhn, are really only trained to do “normal science”—science within a paradigm of hard core theories that are almost never, ever to be questioned.[i] It is rather the opposite, and the reports out last week updating the Higgs data analysis reflect this yen to uncover radical anomalies from which scientists can push the boundaries of knowledge. While it is welcome news that the new data do not invalidate the earlier inference of a Higgs-like particle, many scientists are somewhat dismayed to learn that it appears to be quite in keeping with the Standard Model. From a March 15 article in National Geographic News:

Although a full picture of the Higgs boson has yet to emerge, some physicists have expressed disappointment that the new particle is so far behaving exactly as theory predicts. Continue reading

Categories: significance tests, Statistics | 30 Comments

Normal Deviate: Double Misunderstandings About p-values

sisyphean task

I’m really glad to see that the Normal Deviate has posted about the error in taking the p-value as any kind of conditional probability. I consider the “second” misunderstanding to be the (indirect) culprit behind the “first”.

Double Misunderstandings About p-values

March 14, 2013 – 7:57 pm

It’s been said a million times and in a million places that a p-value is not the probability of H0 given the data.

But there is a different type of confusion about p-values. This issue arose in a discussion on Andrew’s blog.

Andrew criticizes the New York Times for giving a poor description of the meaning of p-values. Of course, I agree with him that being precise about these things is important. But, in reading the comments on Andrew’s blog, it occurred to me that there is often a double misunderstanding.

First, let me say that I am neither defending nor criticizing p-values in this post. I am just going to point out that there are really two misunderstandings floating around. Continue reading
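(A small simulation, my own illustration rather than anything in the Normal Deviate’s post, makes the two claims vivid; the 50/50 split of true and false nulls, the effect size, and the sample size are all arbitrary assumptions.)

```python
# Hypothetical illustration: P(p <= 0.05 | H0 true) = 0.05 by design,
# but P(H0 true | p <= 0.05) depends on how often H0 is true and on
# the power of the test -- so a p-value is not P(H0 | data).
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
n_studies, n, effect = 200_000, 25, 0.5   # assumed, for illustration
h0_true = rng.random(n_studies) < 0.5     # assume half of all nulls true
mu = np.where(h0_true, 0.0, effect)       # true mean is 0 under H0
xbar = rng.normal(mu, 1 / np.sqrt(n))     # sample means, known sd = 1
p = norm.sf(xbar * np.sqrt(n))            # one-sided p-values
sig = p <= 0.05

print("P(p <= .05 | H0 true):", round(sig[h0_true].mean(), 3))  # ~0.05
print("P(H0 true | p <= .05):", round(h0_true[sig].mean(), 3))  # ~0.06 here;
# the second number is whatever the prior proportion and power make it.
```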

Categories: P-values | 3 Comments

Risk-Based Security: Knives and Axes

After a 6-week hiatus from flying, I’m back in the role of female opt-out[i] in a brand new Delta[ii] terminal with free internet and iPads[iii]. I heard last week that the TSA plans to allow small knives in carry-ons, for the first time since 9/11, as “part of an overall risk-based security approach”. But now it appears that flight attendants, pilot unions, a number of elected officials, and even federal air marshals are speaking out against the move, writing letters and petitions of opposition.

“The Flight Attendants Union Coalition, representing nearly 90,000 flight attendants, and the Coalition of Airline Pilots Associations, which represents 22,000 airline pilots, also oppose the rule change.”

Former flight attendant Tiffany Hawk is “stupefied” by the move, “especially since the process that turns checkpoints into maddening logjams — removing shoes, liquids and computers — remains unchanged,” she wrote in an opinion column for CNN. Link is here. Continue reading

Categories: evidence-based policy, Rejected Posts, Statistics | 17 Comments

S. Stanley Young: Scientific Integrity and Transparency

Stanley Young recently shared his summary testimony with me, and has agreed to my posting it.

S. Stanley Young, PhD
Assistant Director for Bioinformatics
National Institute of Statistical Sciences
Research Triangle Park, NC

One-page Summary (Young)
Testimony before the Committee on Science, Space and Technology, 5 March 2013
Scientific Integrity and Transparency
S. Stanley Young, PhD, FASA, FAAAS

Integrity and transparency are two sides of the same coin. Transparency leads to integrity. Transparency means that study protocol, statistical analysis code, and data sets used in papers supporting regulation by the EPA should be publicly available as quickly as possible, and not just going forward. Some might think that peer review is enough to ensure the validity of claims made in scientific papers. Peer review only says that the work meets the common standards of the discipline and that, on its face, the claims are plausible (Feinstein, Science, 1988). Peer review is not enough. Continue reading

Categories: evidence-based policy, Statistics | 10 Comments

Blog Contents 2013 (Jan & Feb)

Error Statistics Philosophy Blog: Table of Contents 2013 (Jan & Feb)
Organized by Nicole Jinn & Jean Miller

January 2013

(1/2) Severity as a ‘Metastatistical’ Assessment
(1/4) Severity Calculator
(1/6) Guest post: Bad Pharma? (S. Senn)
(1/9) RCTs, skeptics, and evidence-based policy
(1/10) James M. Buchanan
(1/11) Aris Spanos: James M. Buchanan: a scholar, teacher and friend
(1/12) Error Statistics Blog: Table of Contents
(1/15) Ontology & Methodology: Second call for Abstracts, Papers Continue reading

Categories: Metablog | Leave a comment

Stephen Senn: Casting Stones

Casting Stones, by Stephen Senn*

At the end of last year I received a strange email from the editor of the British Medical Journal (BMJ) appealing for ‘evidence’ to persuade the UK parliament of the necessity of making sure that data for clinical trials conducted by the pharmaceutical industry are made readily available to all and sundry. I don’t disagree with this aim. In fact, in an article (1) I published over a dozen years ago, I wrote ‘No sponsor who refuses to provide end-users with trial data deserves to sell drugs.’ (p. 26)

However, the way in which the BMJ is choosing to collect evidence does not set a good example. It is one I hope that all scientists would disown and one of which even journalists should be ashamed.

The letter reads:

“Dear Prof Senn,

We need your help to show the House of Commons Science and Technology Select Committee the true scale of the problem of missing clinical data by collating a list of examples. Continue reading

Categories: evidence-based policy, Statistics | 28 Comments

Big Data or Pig Data?

I don’t know if my reading of this Orwellian* piece is in sync with what Rameez intended, but he thought it was fine for me to post it here. See what you think:

“Big Data or Pig Data” (A fable on huge amounts of data and why we don’t need models) by Rameez Rahman, computer scientist; posted at Realm of the SCENSCI

 There was a pig who wanted to be a scientist. He was not interested in models. When asked how he planned on making sense of the world, the pig would say in a deep mysterious voice, “I don’t do models: the world is my model” and then with a twinkle in his eyes, look at his interlocutor smugly.

By his phrase, “I don’t do models, the world is my model”, he meant that the world’s data was enough for him, the pig scientist. The more the data, the pig declared, the more accurately he would be able to predict what might happen in the world. Continue reading

Categories: Statistics | 22 Comments

capitalizing on chance

Mayo playing the slots

Hardly a day goes by when I do not come across an article on the problems for statistical inference based on fallaciously capitalizing on chance: high-powered computer searches and “big” data trolling offer rich hunting grounds out of which apparently impressive results may be “cherry-picked”:

When the hypotheses are tested on the same data that suggested them and when tests of significance are based on such data, then a spurious impression of validity may result. The computed level of significance may have almost no relation to the true level. . . . Suppose that twenty sets of differences have been examined, that one difference seems large enough to test and that this difference turns out to be “significant at the 5 percent level.” Does this mean that differences as large as the one tested would occur by chance only 5 percent of the time when the true difference is zero? The answer is no, because the difference tested has been selected from the twenty differences that were examined. The actual level of significance is not 5 percent, but 64 percent! (Selvin 1970, 104)[1]

…Oh wait, this is from a contributor to Morrison and Henkel way back in 1970! But there is one big contrast, I find, that makes current-day reports so much more worrisome: critics of the Morrison and Henkel ilk clearly report that ignoring a variety of “selection effects” results in a fallacious computation of the actual significance level associated with a given inference; clear terminology is used to distinguish the “computed” or “nominal” significance level on the one hand, and the actual or warranted significance level on the other. Nowadays, writers make it much less clear that the fault lies with the fallacious use of significance tests and other error statistical methods. Instead, the tests are blamed for permitting or even encouraging such misuses. Criticisms to the effect that we should give up trying to teach these methods correctly have hardly helped. The situation is especially puzzling given the fact that these same statistical fallacies have trickled down to the public sphere, what with Ben Goldacre’s “Bad Pharma”, calls for “all trials” to be registered and reported, and the popular articles on the ills of ‘big data’: Continue reading
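(Selvin’s arithmetic is easy to verify, assuming the twenty comparisons are independent: the chance that at least one of twenty null comparisons reaches the 5 percent level is 1 - 0.95^20.)

```python
# Checking Selvin's figure: with 20 independent comparisons, each with
# a 5% chance of a "significant" result under the null, the chance that
# at least one appears significant -- the actual significance level of
# reporting the best-looking difference -- is:
actual = 1 - (1 - 0.05) ** 20
print(f"{actual:.2f}")  # 0.64 -- Selvin's "64 percent"
```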

Categories: Error Statistics, Statistics | 19 Comments
