“Is the Philosophy of Probabilism an Obstacle to Statistical Fraud Busting?” was my presentation at the 2014 Boston Colloquium for the Philosophy of Science: “Revisiting the Foundations of Statistics in the Era of Big Data: Scaling Up to Meet the Challenge.”
As often happens, I never turned these slides into a stand-alone paper, but I have incorporated them into my book (in progress*), “How to Tell What’s True About Statistical Inference”. Background and slides were posted last year.
Slides (draft from Feb 21, 2014)
Download the 54th Annual Program
Cosponsored by the Department of Mathematics & Statistics at Boston University.
10 a.m. – 5:30 p.m.
Photonics Center, 9th Floor Colloquium Room (Rm 906)
8 St. Mary’s Street
pp. 52-3 is the first time I’ve called out the “we’ve tried (to get tests interpreted correctly)” group on their claim of really trying. Probabilists can’t help it because they don’t see how error probabilities serve the inferential goal of assessing “how well probed” rather than “how probable”.
Thanks for the slides, Mayo! They are very clear.
1) My favourite line is: “It’s not so much replication but triangulation that’s required” (pg. 46). I think it is just about the perfect line on the issue.
2) I had a question about the following line, which has appeared a few times in different papers:
“We don’t need an exhaustive list of hypotheses to split off the problem of how well (or poorly) probed a given hypothesis is…” (pg. 25).
I was thinking of something like the Clever Hans effect, where the most reasonable explanation, uncovered later, was that the horse was simply reacting to the questioner’s body language. But since this particular hypothesis wasn’t initially part of the list of possibilities entertained by the researchers, wouldn’t the claim accidentally have looked severely tested, only for it to turn out later that a crucial counter-hypothesis had been missed? Essentially, missing a crucial counter-hypothesis seems to me to invalidate any claim that a particular hypothesis has been severely tested. So, in some sense, doesn’t the severe testing viewpoint also require some acknowledgement of all the possible hypotheses? Otherwise, how could we say that the claim is severely tested? Am I missing something in my understanding of severity here?