Contemporary Philosophy of Statistics (Summer Seminars)
SUMMER 2012. Time and dates: 3-5:00 pm on June 6 & 13, Room T206
SEMINAR 1: (Slides) (Wednesday, June 6, 3-5 pm, T206)
(1) For the first seminar, to get an overview of contemporary error-statistical/Bayesian issues, I propose reading a paper from my conference at the LSE in June 2010: Statistical Science and Philosophy of Science: Where Do/Should They Meet in 2011 (and Beyond)? RMM Vol. 2, 2011, 79–102. (This is actually part 1; part 2 is reserved for a later seminar.)
You might also enjoy reading the conversation between Sir David Cox and me: Statistical Scientist Meets a Philosopher of Science: A Conversation* RMM Vol. 2, 2011, 103–114 (Oct. 18, 2011).
(2) There is a paper from 2004 that, taken as a whole (i.e., including the response to some criticisms), gives a non-technical overview of some key issues on the error-statistical side. See also the post from 5/24/12:
- Mayo, D. (2004). “An Error-Statistical Philosophy of Evidence” in M. Taper and S. Lele (eds.) The Nature of Scientific Evidence: Statistical, Philosophical and Empirical Considerations. Chicago: University of Chicago Press: 79-118. (This version should be quite readable, and the source readily available.)
(3) Splitting the seminars in this fashion gives you an ideal opportunity to spend the interim period (assuming people are interested), or parts of it, on the rudiments of statistical methods. I’m sketching a set of statistical topics/examples, together with a couple of statistics workbooks (on the order of the quite good “The Cartoon Guide to Statistics” (Gonick)), that could be the basis for July-Oct. exercises (at least for those who wish to acquire the background). We can decide later.
SEMINAR #2: June 13, 2012
Having surveyed the interests of the graduate student participants in the seminar, I propose to consider some of the philosophy of statistics issues we plan to cover, but from the perspective of evidence-based policy, where the evidence is statistical. Please share thoughts and questions, both for this and the previous seminar, on the blog page
or to me directly: email@example.com
- Separating/connecting evidential considerations and policy values
- Separating/connecting decision criteria and evidential criteria
- Interpreting results that fail to reject null hypotheses of no risk: power and severity, and some fallacies and how to avoid them
Debates about the evidence (in evidence relevant for regulation and policy) are often intermingled with foundational disagreements both within and between philosophies of statistics, quite apart from policy choices.
e.g., the same data that would lead a significance tester to infer evidence of risk may lead default Bayesians to infer evidence of no or low risk.
Even without taking sides, I argue, we can still compare the standards of evidence employed in particular cases.
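The "power and severity" point about negative results can be made concrete with a small numerical sketch. None of this is from the readings: the one-sided Normal test, sigma, n, and cutoff below are all illustrative assumptions.

```python
from math import erf, sqrt

def norm_cdf(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1 + erf(x / sqrt(2)))

# Hypothetical one-sided test of H0: mu = 0 (no risk increase) vs
# H1: mu > 0, with known sigma and sample size n (illustrative values).
sigma, n, z_alpha = 1.0, 25, 1.645   # z_alpha cuts off alpha = .05
se = sigma / sqrt(n)

def power(mu1):
    """Probability the test rejects H0 when the true mean is mu1."""
    return 1 - norm_cdf(z_alpha - mu1 / se)

def severity_of_no_risk(xbar, mu1):
    """How severely a non-significant result xbar rules out a risk
    increase as large as mu1: the probability of a result larger
    than xbar, were mu actually mu1."""
    return 1 - norm_cdf((xbar - mu1) / se)
```

On these made-up numbers, a non-significant xbar = 0.1 rules out an increase of 0.6 with severity about .99, but rules out an increase of 0.1 with severity only .5: "no evidence of risk" is good evidence of "no (or low) risk" only against alternatives the test probed severely.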
Material for Seminar #2: Philosophy of Statistics and Evidence Relevant for Regulating Risks: Objectivity and Dirty Hands, Fallacies of Insignificant Results
Pp. 106–115, with the example from pp. 114–115:
Mayo, D. (2004). “An Error-Statistical Philosophy of Evidence” in M. Taper and S. Lele (eds.) The Nature of Scientific Evidence: Statistical, Philosophical and Empirical Considerations. Chicago: University of Chicago Press: 79-118.
For background on objectivity, evidence relevant for regulation, and negative results, instead of referring you to another paper, here are some short blog posts:
Formaldehyde Hearing: How to Tell the Truth With Statistically Insignificant Results
One of the first examples I came across of problems in construing statistically insignificant (or “negative”) results was a House Science and Technology investigation of an EPA ruling on formaldehyde in the 1980s. Investigators of the EPA (led by then-Representative Al Gore!) used rather straightforward, day-to-day reasoning: no evidence of risk is not evidence of […]
Objectivity #2: The “Dirty Hands” Argument for Ethics in Evidence
Some argue that generating and interpreting data for purposes of risk assessment invariably introduces ethical (and other value) considerations that might not only go beyond, but might even conflict with, the “accepted canons of objective scientific reporting.” This thesis (call it the thesis of ethics in evidence and inference), some think, shows that […]
Objectivity #3: Clean(er) Hands With Metastatistics
I claim that all but the first of the “dirty hands” argument’s five premises are flawed. Even the first premise, too, directly identifies a policy decision with a statistical report. But the key flaws begin with premise 2. Although risk policies may be based on a statistical report of evidence, it does not follow that […]
Objectivity (#4) and the “Argument From Discretion”
We constantly hear that procedures of inference are inescapably subjective because of the latitude of human judgment as it bears on the collection, modeling, and interpretation of data. But this is seriously equivocal: Being the product of a human subject is hardly the same as being subjective, at least not in the sense we are […]
Objectivity (#5): Three Reactions to the Challenge of Objectivity (in inference):
(1) If discretionary judgments are thought to introduce subjectivity in inference, a classic strategy thought to achieve objectivity is to extricate such choices, replacing them with purely formal a priori computations or agreed-upon conventions (see March 14). If leeway for discretion introduces subjectivity, then cutting off discretion must yield objectivity! Or so some argue. Such […]
Interpreting Negative Results:
Neyman’s Nursery 2: Power and Severity [Continuation of Oct. 22 Post]:
Let me pick up where I left off in “Neyman’s Nursery,” [built to house Giere’s statistical papers-in-exile]. The main goal of the discussion is to get us to exercise correctly our “will to understand power”, if only little by little. One of the two surprising papers I came across the night our house was hit […]
Neyman’s Nursery (NN) 3: SHPOWER vs POWER
EGEK weighs 1 pound. Before leaving base again, I have a rule to check on weight gain since the start of my last trip. I put this off til the last minute, especially when, like this time, I know I’ve overeaten while traveling. The most accurate of the 4 scales I generally use (one is […]
Neyman’s Nursery (NN5): Final Post
I want to complete the Neyman’s Nursery (NN) meanderings while we have some numbers before us, and while there is a particular example, test T+, on the table. Despite my warm and affectionate welcoming of the “power analytic” reasoning I unearthed in those “hidden Neyman” papers (see post from Oct. 22)– admittedly, largely lost in […]
Anything Tests Can do, CIs do Better; CIs Do Anything Better than Tests?* (reforming the reformers cont.)
*The title is to be sung to the tune of “Anything You Can Do I Can Do Better” from one of my favorite plays, Annie Get Your Gun (‘you’ being replaced by ‘test’). This post may be seen to continue the discussion in the May 17 post on Reforming the Reformers. Consider again our one-sided Normal […]
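Since the post's own example (test T+) is cut off in the excerpt above, here is a generic sketch of the test–CI duality the title plays on. The setup, sigma, n, and cutoff are hypothetical stand-ins, not the post's actual numbers.

```python
from math import sqrt

# Hypothetical one-sided Normal setup: H0: mu <= mu0 vs H1: mu > mu0,
# sigma known, n observations (all values illustrative).
sigma, n, z = 1.0, 25, 1.645          # z cuts off alpha = .05
se = sigma / sqrt(n)

def rejects(xbar, mu0):
    """Does the alpha-level one-sided test reject H0: mu <= mu0?"""
    return xbar > mu0 + z * se

def lower_bound(xbar):
    """One-sided 95% lower confidence bound for mu."""
    return xbar - z * se

# Duality: the test rejects mu0 exactly when mu0 lies below the
# one-sided lower confidence bound computed from the same data.
for xbar in (0.0, 0.3, 0.5):
    for mu0 in (0.0, 0.2):
        assert rejects(xbar, mu0) == (mu0 < lower_bound(xbar))
```

In this sense anything the test "can do" the confidence bound also does; whether CIs do it *better* is the question the post takes up.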
Objectivity 1: Will the Real Junk Science Please Stand Up?
Posted on October 10, 2011 by Mayo
Have you ever noticed in wranglings over evidence-based policy that it’s always one side that’s politicizing the evidence—the side whose policy one doesn’t like? The evidence on the near side, or your side, however, is solid science. Let’s call those who first coined the term “junk science” Group 1. For Group 1, junk science is […]
Interpreting positive results:
Part 1: Imaginary scientist at an imaginary company, Prionvac, and an imaginary Reformer
Posted on September 29, 2011 by Mayo
Prionvac: Our experiments yield a statistically significant increase in survival among scrapie-infected mice who are given our new vaccine (p = .01) compared to infected mice who are treated with a placebo. The data indicate H: an increased survival time of 9 months, compared to untreated mice.*
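To see why inferring the specific alternative H from the significant result is fallacious, here is a minimal numerical sketch. All figures are hypothetical: the standard error below is chosen simply so that the one-sided p-value against "no increase" comes out near .01, as in the Prionvac report.

```python
from math import erf, sqrt

def norm_cdf(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1 + erf(x / sqrt(2)))

# Illustrative numbers only: observed mean survival increase of 9
# months, standard error picked so the p-value is roughly .01.
xbar, se = 9.0, 3.87

p = 1 - norm_cdf(xbar / se)           # about .01 (hypothetical)

def severity(mu1):
    """Severity for the claim mu > mu1, given the significant result:
    the probability of a less extreme outcome were mu only mu1."""
    return norm_cdf((xbar - mu1) / se)

# severity(0.0) is about .99: the data do warrant *some* increase.
# severity(9.0) is exactly .5: they do not warrant the specific
# claim of a 9-month increase, despite p = .01.
```

The significant result is strong evidence of some increased survival, but taking it as evidence of the full observed 9-month increase is the fallacy the imaginary Reformer trades on.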
For some fun, check out my blog: errorstatistics.com (You can scan topics, rooting out what’s of most interest to you.)
I have created a “page” at the top of the blog for this seminar.