Given some slight recuperation delays, interested readers might wish to poke around the multiple layers of goodies on the left-hand side of this web page, wherein all manner of foundational/statistical controversies are considered. In a recent attempt by Aris Spanos and me to address the age-old criticisms from the perspective of the "error statistical philosophy," we delineate 13 criticisms. Here they are:
(#1) Error statistical tools forbid using any background knowledge.
(#2) All statistically significant results are treated the same.
(#3) The p-value does not tell us how large a discrepancy is found.
(#4) With large enough sample size even a trivially small discrepancy from the null can be detected.
(#5) Whether there is a statistically significant difference from the null depends on which is the null and which is the alternative.
(#6) Statistically insignificant results are taken as evidence that the null hypothesis is true.
(#7) Error probabilities are invariably misinterpreted as posterior probabilities.
(#8) Error statistical tests are justified only in cases where there is a very long (if not infinite) series of repetitions of the same experiment.
(#9) Specifying statistical tests is too arbitrary.
(#10) We should be doing confidence interval estimation rather than significance tests.
(#11) Error statistical methods take into account the intentions of the scientists analyzing the data.
(#12) All models are false anyway.
(#13) Testing assumptions involves illicit data-mining.
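Criticisms #3 and #4 are easy to see numerically. The sketch below (my own illustration, not from the paper) runs a standard one-sided z-test with known sigma on a fixed, trivially small discrepancy of 0.02 standard deviations from the null: at modest sample sizes the result is nowhere near significant, yet at large enough n the very same discrepancy yields an overwhelmingly small p-value. The function name and the particular numbers are just chosen for the example.

```python
import math

def z_test_p_value(mean_obs, mu0, sigma, n):
    """One-sided p-value for H0: mu = mu0 vs H1: mu > mu0,
    under a normal model with known sigma (standard z-test)."""
    z = (mean_obs - mu0) * math.sqrt(n) / sigma
    # Upper-tail probability of a standard normal at z:
    return 0.5 * math.erfc(z / math.sqrt(2))

# The same trivially small discrepancy (0.02 sigma) at growing sample sizes:
for n in (100, 10_000, 1_000_000):
    p = z_test_p_value(mean_obs=0.02, mu0=0.0, sigma=1.0, n=n)
    print(f"n = {n:>9}: p = {p:.4g}")
```

So a small p-value alone does not tell you the discrepancy is large (#3); with enough data, any nonzero discrepancy, however trivial, will register as significant (#4).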
HAVE WE LEFT ANY OUT?
(for problems accessing links, please write to: firstname.lastname@example.org)