[Image: straw person fallacy]

5-year Review: The ASA’s P-value Project: Why it’s Doing More Harm than Good (cont from 11/4/19)


I continue my selective 5-year review of posts revolving around the statistical significance test controversy of 2019. This post was first published on the blog on November 14, 2019. I feared then that many of the howlers about statistical significance tests would be further etched in granite by the ASA’s P-value project, and in many quarters this is, unfortunately, what has happened. One that I’ve noticed quite a lot is the (false) supposition that negative results are uninformative. Some fields, notably psychology, keep to a version of simple Fisherian tests, ignoring Neyman-Pearson (N-P) tests (never mind that Jacob Cohen, who gave us “power analysis,” was a psychologist). (See note [1].) For N-P, “it is immaterial which of the two alternatives…is labelled the hypothesis tested” (Neyman 1950, 259). Failing to find evidence of a genuine effect, coupled with the test’s having high capability to detect meaningful effects, warrants inferring the absence of meaningful effects. Even with the simple Fisherian test, failing to reject H0 is informative. Null results figure importantly throughout science: the Michelson-Morley experiment’s null result helped overturn the ether theory, and null results more generally direct attention away from unproductive lines of theory development.
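The reasoning about negative results can be made concrete with a power calculation. The sketch below (my illustration, not from the post; the function names and the choice of a one-sided z-test with known variance are assumptions for simplicity) shows that when a test has high power against an effect of size delta, a failure to reject is itself informative: effects that large would almost surely have been detected.

```python
# Hypothetical illustration: power of a one-sided z-test of H0: mu <= 0
# against the alternative mu = delta, with known sigma and sample size n.
# High power against delta means a non-rejection warrants inferring
# the absence of effects as large as delta.
from math import erf, sqrt

def phi(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + erf(x / sqrt(2)))

def z_test_power(delta, sigma, n):
    """Probability of rejecting H0 at alpha = 0.05 when the true mean is delta."""
    z_alpha = 1.6448536269514722  # upper 0.05 quantile of the standard normal
    return phi(delta * sqrt(n) / sigma - z_alpha)

# With n = 100 and sigma = 1, power against delta = 0.5 is near 1,
# so failing to reject is strong evidence that mu < 0.5.
print(z_test_power(0.5, 1.0, 100))
```

Note that the same non-rejection says little about effects the test had low power to detect (e.g., delta = 0.05 here), which is exactly why the power consideration matters.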

Please share your comments on this blogpost.

Categories: 5-year memory lane, statistical significance tests, straw person fallacy | 1 Comment

Why hasn’t the ASA Board revealed the recommendations of its new task force on statistical significance and replicability?

[Image: something’s not revealed]

A little over a year ago, the board of the American Statistical Association (ASA) appointed a new Task Force on Statistical Significance and Replicability (under then-president Karen Kafadar) to provide it with recommendations. [Its members are here (i).] You might remember my blogpost at the time, “Les Stats C’est Moi”. The Task Force worked quickly, despite the pandemic, delivering its recommendations to the ASA Board early, in time for the Joint Statistical Meetings (JSM) at the end of July 2020. But the ASA hasn’t revealed the Task Force’s recommendations, and I just learned yesterday that it has no plans to do so*. A panel session I was in at the JSM (P-values and ‘Statistical Significance’: Deconstructing the Arguments) grew out of this episode, and the papers from the proceedings are now out. The introduction to my contribution gives you the background to my question, while revealing one of the recommendations (I know of only two).

Categories: 2016 ASA Statement on P-values, JSM 2020, replication crisis, statistical significance tests, straw person fallacy | 8 Comments
