Tour I The Myth of “The Myth of Objectivity”*
Objectivity in statistics, as in science more generally, is a matter of both aims and methods. Objective science, in our view, aims to find out what is the case as regards aspects of the world [that hold] independently of our beliefs, biases and interests; thus objective methods aim for the critical control of inferences and hypotheses, constraining them by evidence and checks of error. (Cox and Mayo 2010, p. 276)
Whenever you come up against blanket slogans such as “no methods are objective” or “all methods are equally objective and subjective,” it is a good guess that the problem is being trivialized into oblivion. Yes, there are judgments, disagreements, and values in any human activity; but that observation is too trivial, on its own, to distinguish among the very different ways that threats of bias and unwarranted inference may be controlled. Is the objectivity–subjectivity distinction really toothless, as many would have you believe? I say no. I know it’s a meme promulgated by statistical high priests, but you agreed, did you not, to use a bit of chutzpah on this excursion? Besides, cavalier attitudes toward objectivity are at odds with even more widely endorsed grassroots movements to promote replication and reproducibility, and to come clean on the sources behind illicit results: multiple testing, cherry picking, failed assumptions, researcher latitude, publication bias, and so on. The moves to take back science are rooted in the supposition that we can more objectively scrutinize results – even if it’s only to point out those that are BENT. The fact that these terms are used equivocally should not be taken as grounds to oust them, but rather to engage in the difficult work of identifying what there is in “objectivity” that we won’t give up, and shouldn’t.
The Key Is Getting Pushback!
While knowledge gaps leave plenty of room for biases, arbitrariness, and wishful thinking, we regularly come up against data that thwart our expectations and disagree with the predictions we try to foist upon the world. We get pushback! This supplies objective constraints on which our critical capacity is built. Our ability to recognize when data fail to match anticipations affords the opportunity to systematically improve our orientation. Explicit attention needs to be paid to communicating results, to set the stage for others to check, debate, and extend the inferences reached. Which conclusions are likely to stand up? Where are the weakest parts? Don’t let anyone say you can’t hold them to an objective account.
Excursion 2, Tour II led us from a Popperian tribe to a workable demarcation for scientific inquiry. That will serve as our guide now for scrutinizing the myth of the myth of objectivity. First, good sciences put claims to the test of refutation, and must be able to embark on an inquiry to pin down the sources of any apparent effects. Second, refuted claims aren’t held on to in the face of anomalies and failed replications; they are treated as refuted in further work (at least provisionally), while well-corroborated claims are used to build theory and methods: science is not just stamp collecting. The good scientist deliberately arranges inquiries so as to capitalize on pushback, on effects that will not go away, on strategies to get errors to ramify quickly and force us to pay attention to them. The ability to register how hunting, optional stopping, and cherry picking alter a method’s error-probing capacities is a crucial part of its objectivity. In statistical design, day-to-day tricks of the trade to combat bias are consciously amplified and made systematic. It is not because of a “disinterested stance” that we invent such methods; it is that we, quite competitively and self-interestedly, want our theories to succeed in the marketplace of ideas.
Admittedly, that desire won’t suffice to incentivize objective scrutiny if you can do just as well producing junk. Successful scrutiny is very different from success at getting grants, publications, and honors. That is why the reward structure of science is so often blamed nowadays. New incentives, gold stars and badges for sharing data and for resisting the urge to cut corners, are being adopted in some fields. Fortunately for me, our travels will bypass the lands of policy recommendations, where I have no special expertise. I will stop at the perimeter: the scrutiny of methods, which at least provides us citizen scientists with armor against being misled. Still, if the allure of carrots has grown stronger than the sticks, we need stronger sticks.
Problems of objectivity in statistical inference are deeply intertwined with a jungle of philosophical problems, in particular with questions about what objectivity demands, and disagreements about “objective versus subjective” probability. On to the jungle!
*From Statistical Inference as Severe Testing: How to Get Beyond the Statistics Wars (Mayo 2018, CUP)
Notes to the Reader of this Blog:
Many of the ideas on objectivity in Excursion 4 Tour I are distilled from posts and discussions on this blog. I’ve pasted some of those posts below, starting with a relatively recent one bearing the title of this Tour. Perusing the comments by readers is valuable in its own right. (You can find a list of all posts on this blog by searching “All She Wrote (so far)”.)
The Myth of “The Myth of Objectivity”
Objectivity #2: The ‘Dirty Hands’ Argument for Ethics in Evidence
Objectivity #3: Clean(er) Hands With Metastatistics
Objectivity (#4) and the “Argument From Discretion”
Objectivity in Statistics: “Arguments From Discretion and 3 Reactions”
As co-author of a paper in which the use of the term “objective” in statistics was branded as “unhelpful,” I enjoyed reading your objectivity chapter a lot. My issue with objectivity is not so much that I think it is in fact (objectively ;-)) a myth, but rather that all kinds of meanings of “objective” are in circulation, and the term is often used in manipulative ways: justified on the basis of one definition while, consciously or unconsciously, appealing to an audience that has another in mind.
I’m all fine with the pushback concept, and generally what you advertise as objective is in line with (although not covering in full) Gelman’s and my list of desiderata. However, the use of the term “objective” may encourage some people to see more in it than is actually achieved (it has its limits – I’m not sure whether this is the right place and time to elaborate on them).
Thank you. Will you write something around mid-Jan? I believe Gelman might as well.
Yes, I hope I can do that.
Here’s a link to the Gelman and Hennig (2017) paper, “Beyond subjective and objective in statistics.” My first paper on “objectivity” was “An Objective Theory of Statistics” in 1981.