Live Exhibit: So what happens if you replace “p-values” with “Bayes factors” in the six principles from the 2016 American Statistical Association (ASA) Statement on P-values? (Remove “or statistical significance” from principle 5.)
Does the one positive assertion hold? Are the 5 “don’ts” true?
- P-values can indicate how incompatible the data are with a specified statistical model.
- P-values do not measure the probability that the studied hypothesis is true, or the probability that the data were produced by random chance alone.
- Scientific conclusions and business or policy decisions should not be based only on whether a p-value passes a specific threshold.
- Proper inference requires full reporting and transparency. P-values and related analyses should not be reported selectively. Conducting multiple analyses of the data and reporting only those with certain p-values (typically those passing a significance threshold) renders the reported p-values essentially uninterpretable.
- A p-value, or statistical significance, does not measure the size of an effect or the importance of a result.
- By itself, a p-value does not provide a good measure of evidence regarding a model or hypothesis.
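To make the substitution exercise concrete, here is a minimal sketch (my own illustration, not from the ASA statement or SIST; the data summary and the point alternative mu = 0.5 are invented for the example) contrasting the two quantities on the same data. It uses a one-sample z-test with known sigma and, for the Bayes factor, a simple likelihood ratio against a point alternative:

```python
import math

def normal_pdf(x, mu, sigma):
    # Density of a Normal(mu, sigma) distribution at x
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def z_test_p_value(xbar, mu0, sigma, n):
    # Two-sided p-value for H0: mu = mu0, with known sigma
    z = (xbar - mu0) / (sigma / math.sqrt(n))
    # P(|Z| >= |z|) for standard normal Z, via the complementary error function
    return math.erfc(abs(z) / math.sqrt(2))

def bayes_factor_point(xbar, mu0, mu1, sigma, n):
    # BF_01: likelihood of the observed sample mean under H0: mu = mu0
    # relative to a *point* alternative H1: mu = mu1 (a deliberately
    # simple choice; real Bayes factors typically average over a prior on mu)
    se = sigma / math.sqrt(n)
    return normal_pdf(xbar, mu0, se) / normal_pdf(xbar, mu1, se)

# Same data, two summaries: xbar = 0.4, sigma = 1, n = 25
p = z_test_p_value(0.4, 0.0, 1.0, 25)            # ≈ 0.046
bf01 = bayes_factor_point(0.4, 0.0, 0.5, 1.0, 25)  # < 1: data favor mu = 0.5 over mu = 0
```

Note the asymmetry the exercise turns on: the p-value measures incompatibility with the null model alone, while the Bayes factor only exists relative to a specified alternative, so each of the six principles reads differently after the substitution.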
I will hold off saying what I think until our Phil Stat forum (Phil Stat Wars and Their Casualties) on Thursday, although anyone who has read Statistical Inference as Severe Testing: How to Get Beyond the Statistics Wars [SIST] (CUP, 2019) will have a pretty good idea. You can read the relevant sections 4.5 and 4.6 in proof form. In SIST, I called examples “exhibits”, and examples the reader is invited to work through are called “live exhibits”. That’s because the whole book involves tours through statistical museums.
What do you think?
For my general take on the meaning of the theme, see Statistical Crises and Their Casualties.
Selected blog posts on the 2016 ASA Statement on P-values and the Wasserstein et al. March 2019 editorial in the supplement to The American Statistician:
- March 7, 2016: “Don’t Throw Out the Error Control Baby With the Bad Statistics Bathwater”
- March 25, 2019: “Diary for Statistical War Correspondents on the Latest Ban on Speech”
- June 17, 2019: “The 2019 ASA Guide to P-values and Statistical Significance: Don’t Say What You Don’t Mean (Some Recommendations)” (ii)
- July 19, 2019: “The NEJM Issues New Guidelines on Statistical Reporting: Is the ASA P-Value Project Backfiring?” (i)
- September 19, 2019: “(Excerpts from) ‘P-Value Thresholds: Forfeit at Your Peril’” (free access). The article by Hardwicke and Ioannidis (2019) and the editorials by Gelman and by me are linked in that post; my editorial is “P-value Thresholds: Forfeit at Your Peril”.
- November 4, 2019: “On some Self-defeating aspects of the ASA’s 2019 recommendations of statistical significance tests”
- November 14, 2019: “The ASA’s P-value Project: Why It’s Doing More Harm than Good” (cont. from 11/4/19)
- November 30, 2019: “P-Value Statements and Their Unintended(?) Consequences: The June 2019 ASA President’s Corner” (b)
- “Les Stats C’est Moi: We Take That Step Here!”
Hi there… this won’t be the question I was talking about at the end of the Zoom session, as I was able to answer that one myself.
Instead, I will just post the question I asked in the chat, which Mayo seemed to like, namely:
“If you want your prior to be information-less, why use an approach such as the Bayesian one that requires you to specify a prior in the first place?”