Kent Staley: Commentary on “The statistics wars and intellectual conflicts of interest” (Guest Post)



Kent Staley

Professor
Department of Philosophy
Saint Louis University

 

Commentary on “The statistics wars and intellectual conflicts of interest” (Mayo editorial)

In her recent Editorial for Conservation Biology, Deborah Mayo argues that journal editors “should avoid taking sides” regarding “heated disagreements about statistical significance tests.” In particular, they should not impose bans, urged by combatants in the “statistics wars,” on statistical methods advocated by the opposing side, such as Wasserstein et al.’s (2019) proposed ban on declarations of statistical significance and on the use of p-value thresholds. Were journal editors to adopt such proposals, Mayo argues, they would be acting under a conflict of interest (COI) of a special kind: an “intellectual” conflict of interest.

Conflicts of interest are worrisome because of the potential for bias. Researchers will no doubt be all too familiar with the institutional/bureaucratic requirement of declaring financial interests. Whether such disclosures provide substantive protection against bias or simply satisfy a “CYA” requirement of administrators, the rationale is that the assessment of research outcomes can then incorporate information bearing on whether the investigators have arrived at a conclusion that overstates (or even fabricates) the support for a claim whose acceptance would financially benefit them. This in turn ought to reduce the temptation for investigators to engage in such inflation or fabrication of support. The idea applies just as naturally to editorial decisions as to research conclusions.

Mayo’s “intellectual” COIs differ from this familiar case. The relevant interests of (in this case) journal editors are not financial, but concern policies governing the conduct of science itself.

One might object that journal editors are entrusted with decision-making power precisely to adopt and act upon such policies, and this distinguishes intellectual COIs from financial ones. Journal editors, according to this view, are responsible for making informed and reasoned judgments about the standards that distinguish credible research conclusions. They cannot do so if they are barred from adopting standards in accord with their personal judgments. To have an intellectual interest in a policy is simply to think that it is a good idea, and shouldn’t journal editors act on good ideas when they (think that they) have them?

To continue the objection, take an example from the field of particle physics: the editors of Physical Review D surely ought to be free to impose the requirement that claims to have “observed” a new phenomenon cannot be published unless the putative signal for that phenomenon constitutes at least a 5σ departure from the null hypothesis prediction. They ought to have the ability to impose such a requirement even though some members of the particle physics community are critical of that policy, or (perhaps because they prefer Bayesian analyses) reject even the use of significance calculations as a requirement for discovery claims.
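
For concreteness, the 5σ convention amounts to demanding a very small one-sided tail probability under the null (background-only) hypothesis. The following is a minimal Python sketch of that conversion, using scipy’s normal survival function; the helper name sigma_to_p is just an illustrative label, not anything drawn from the editorial or from journal policy.

```python
from scipy.stats import norm

def sigma_to_p(n_sigma: float) -> float:
    """One-sided tail probability of an n-sigma excess under the null."""
    return norm.sf(n_sigma)  # survival function, i.e., 1 - Phi(n_sigma)

# 3 sigma ("evidence for") versus 5 sigma ("observation")
for n in (3, 5):
    print(f"{n} sigma -> p = {sigma_to_p(n):.2e}")
# 3 sigma -> p = 1.35e-03
# 5 sigma -> p = 2.87e-07
```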

Perhaps such an objection might be encouraged by the idea of an intellectual COI, but I think it misses the point of Mayo’s argument. The dispute within the statistical community over significance testing, and the “statistics wars” more generally, is fundamentally a philosophical one, or at least involves, in Mayo’s words, “philosophical presuppositions.” These presuppositions concern such fundamental aspects of scientific inquiry as “what is the purpose of a statistical test?” and “do the beliefs of investigators matter to how the results of inquiry are characterized, and if so, how?” Philosophical disputes tend to have a bad reputation among non-philosophers because they are often thought to be never-ending or even unresolvable in principle. Perhaps some are, but even in those cases (and I don’t think this is one), there is at least the possibility of progress in terms of clarifying what is at stake and eliminating non-viable positions from consideration. In any case, so long as competing methodological approaches in a given field rest upon differing philosophical presuppositions, about which there is legitimate and ongoing disagreement, to preclude the use of one of those approaches as a matter of editorial policy would be to foreclose the possibility of engaging that philosophical dispute at the level of scientific practice. The consequences of that foreclosure for the scientific discipline itself would be impoverishing.

All commentaries on Mayo’s (2021) editorial received as of Jan 31, 2022 (more to come*)

Schachtman
Park
Dennis
Stark
Staley
Pawitan
Hennig
Ionides and Ritov
Haig
Lakens

*Let me know if you wish to write one

Categories: conflicts of interest, editors, intellectual COI, significance tests, statistical tests


6 thoughts on “Kent Staley: Commentary on “The statistics wars and intellectual conflicts of interest” (Guest Post)”

  1. Kent:
    Thank you so much for your commentary, and for attending yesterday’s forum. The reason I spent so much time clarifying the meaning of the “philosophical presuppositions” is that it has become convenient for opponents of statistical significance tests to deflect criticism by claiming “it has nothing to do with philosophy.” They will then go on to say the problem is that p-values exaggerate evidence, or are not evidence at all (because they violate the likelihood principle). Statistical significance tests have a “statistical philosophy,” and it is all about recognizing and controlling human biases in selectively favoring a view or intervention. It was wonderful to see over 70 attendees yesterday. The fight for free thinking about statistical method is far from over.

  2. Dan Riley

    A little clarification wrt the particle physics example: the 5 sigma requirement for a discovery is a community standard, not something imposed by PRD or any other journal. It also is not a cutoff for publication, just for calling it an observation. CMS and ATLAS both published 3-sigma Higgs results as “evidence for”, and there have been a zillion upper limits for various Higgs decay modes at various energies–which brings me to the most important point: in particle physics, there is no p-value threshold for publication. Our analyses are sufficiently standardized that null results are considered significant, and so pretty much any competent analysis gets published. If the null isn’t rejected, then a 95% CL upper limit is usually reported.

    • Dan:
      “Our analyses are sufficiently standardized that null results are considered significant, and so pretty much any competent analysis gets published.”

      Perhaps you’re saying that even formally non-significant results are valuable because they allow upper bounds to be set. Exactly. I always think HEP is a great exemplar of how “negative” results inform, e.g., future directions for theory development.

    • Thanks for this clarification, Dan. Of course, you are right on all points. In my attempt to point very briefly to an example of a statistical policy that seems at least apt, if not beyond dispute, I may have created the wrong impression of the basis and nature of the 5-sigma rule. I did refer to it as a requirement for claims of “observation” and not for publication per se. My impression of its relevance to journal publication is that, although it is a “community standard” rather than a formal policy, as you say, a paper making a discovery claim without meeting that standard can be expected to meet editorial resistance. In any case, I agree with Deborah’s comments about the treatment of null results in HEP, and I also regard HEP as a good example of how p-values can be given an appropriately constrained but positive role, understood within an overall pragmatic framework. I have written about that in my paper “Pragmatic warrant for frequentist statistical practice: the case of high energy physics” (Synthese 2017, v. 194: 355–376). https://errorstatistics.files.wordpress.com

  3. Pingback: Paul Daniell & Yu-li Ko commentaries on Mayo’s ConBio Editorial | Error Statistics Philosophy

