This short paper, together with my response to comments by Casella and McCoy, may provide an OK overview of some issues and ideas, and since I’m making it available for my upcoming PH500 seminar*, I thought I’d post it here too. The paper itself was a 15-minute presentation at the Ecological Society of America in 1998; my response to criticisms, of around the same length, was requested much later. While in some ways the time lag shows (e.g., McCoy’s reference to “reductionist” accounts, part of the popular constructivist leanings of the time, and scant mention of the Bayesian developments taking place around then), it is simple, short, and non-technical.** Also, as I should hope, my own views have gone considerably beyond what I wrote then.
(Taper and Lele did an excellent job with this volume, as long as it took, particularly interspersing the commentary. I recommend it!***)
Mayo, D. (2004). “An Error-Statistical Philosophy of Evidence” in M. Taper and S. Lele (eds.) The Nature of Scientific Evidence: Statistical, Philosophical and Empirical Considerations. Chicago: University of Chicago Press: 79-118 (with discussion).
Despite the widespread use of error-statistical methods in science, these methods have been the subject of enormous criticism, giving rise to the popular “statistical reform” movement and bolstering subjective Bayesian philosophy of science. Given the new emphasis of philosophers of science on scientific practice, it is surprising to find that they are rarely called upon to shed light on the large literature now arising from debates about these reforms—debates that are so often philosophical. I have long proposed reinterpreting standard statistical tests as tools for obtaining experimental knowledge. In my account of testing, data x are evidence for a hypothesis H to the extent that H passes a severe test with x. The familiar statistical hypotheses, as I see them, serve to ask questions about the presence of key errors: mistaking real effects for chance, or mistakes about parameter values, causes, and experimental assumptions. An experimental result is a good indication that an error is absent if there is a very high probability that the error would have been detected if it existed, and yet it was not detected. These results provide a good (poor) indication of a hypothesis H to the extent that H passes a test with high (low) severity. Tests with low error probabilities are justified by the corresponding reasoning for hypotheses that pass severe tests.
*PH500 Contemporary Philosophy of Statistics: As a visitor to the Centre for Philosophy of Natural and Social Science (CPNSS) at the London School of Economics and Political Science, I am planning to lead 5 seminars in the Department of Philosophy, Logic, and Scientific Method this summer (2) and autumn (3) on Contemporary Philosophy of Statistics under the PH500 rubric (listed under the summer term). These will be rather informal, based on the book I am writing with this name. There will be at least one guest seminar leader in the fall. Anyone interested in attending or finding out more may write to me: firstname.lastname@example.org .
Wednesday 6th June 3-5pm T206
Wednesday 13th June 3-5pm T206
Autumn term dates: To Be Announced
**I’ve heard it referred to as “Mayo Lite”.
***Never mind that some of the ecologists are, or were, somewhat under the spell of likelihoodist Richard Royall. Royall told me that he would have preferred having influence over a less messy field, but he got the ecologists (something like that). Personally, I rather like ecologists.