Monthly Archives: March 2021

The Stat Wars and Intellectual Conflicts of Interest: Journal Editors

 

Like most wars, the Statistics Wars continue to have casualties. Some of the reforms thought to improve reliability and replication may actually create obstacles to methods known to improve them. At each of our meetings of the Phil Stat Forum, “The Statistics Wars and Their Casualties,” I take 5-10 minutes to draw out a proper subset of the casualties associated with the topic of the day’s presenter. (The associated workshop that I have been organizing with Roman Frigg at the London School of Economics (CPNSS) now has a date for a hoped-for in-person meeting in London: 24-25 September 2021.) Of course we’re interested not just in casualties but in positive contributions, though what counts as a casualty and what counts as a contribution is itself a focus of the battles in philosophy of statistics.

At our last meeting, Thursday, 25 March, Mark Burgman, Director of the Centre for Environmental Policy at Imperial College London and Editor-in-Chief of the journal Conservation Biology, spoke on “How should applied science journal editors deal with statistical controversies?” His slides are here: (pdf). The casualty I focused on is how the statistics wars may put journal editors in positions of conflict of interest that can get in the way of transparency and avoidance of bias. I presented it in terms of four questions (nothing to do with the fact that it’s currently Passover):

 

D. Mayo’s Casualties: Intellectual Conflicts of Interest: Questions for Burgman

 

  1. In an applied field such as conservation science, where statistical inferences often are the basis for controversial policy decisions, should editors and editorial policies avoid endorsing one side of the long-standing debate revolving around statistical significance tests?  Or should they adopt and promote a favored methodology?
  2. If editors should avoid taking a side in setting authors’ guidelines and reviewing papers, what policies should be adopted to avoid deferring to the calls of those who want them to change their authors’ guidelines? Have you ever been encouraged to do so?
  3. If one has a strong philosophical-statistical standpoint and a strong interest in persuading others to accept it, does that create a conflict of interest when one also has the power to enforce that philosophy (especially in a group already driven by perverse incentives)? If so, what is your journal doing to take account of, and prevent, conflicts of interest?
  4. What do you think of the March 2019 editorial of The American Statistician (Wasserstein et al., 2019), which urges: don’t say “statistical significance” and don’t use predesignated p-value thresholds (e.g., .05, .01, .005) in interpreting data?

(While the editorial is not an ASA policy document, Wasserstein’s status as ASA executive director gave it a lot of clout. Should he have issued a disclaimer that the article represents only the authors’ views?) [1]
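To make concrete what is at stake in the “no thresholds” recommendation in question 4, here is a minimal sketch (in Python; the observed test statistic is hypothetical, not drawn from any paper discussed here) contrasting a fixed-threshold verdict with simply reporting the p-value itself:

```python
import math

def two_sided_p(z):
    """Two-sided p-value for a standard-normal test statistic z.

    Uses the identity 2*(1 - Phi(|z|)) = erfc(|z| / sqrt(2)).
    """
    return math.erfc(abs(z) / math.sqrt(2))

# Hypothetical observed test statistic
z_obs = 2.1
p = two_sided_p(z_obs)

# Threshold-based report: the practice the editorial says to drop
verdict = "statistically significant" if p < 0.05 else "not statistically significant"

# Continuous report: just state the p-value, no predesignated cutoff
report = f"p = {p:.3f}"
```

The dispute is over which of these two reports (a dichotomous verdict against a predesignated cutoff, or the p-value on its own) journals should require or forbid in authors’ guidelines.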

This is the first of several posts on intellectual conflicts of interest that I’ll be writing shortly. [2]


Mark Burgman’s presentation (Link)

D. Mayo’s Casualties (Link)


[1] For those who don’t know the story: Because no disclaimer was issued, the ASA Board appointed a new Task Force on Statistical Significance and Replicability in 2019 to provide recommendations. These have thus far not been made public. For the background, see this post.

Burgman said that he had received a request to follow the “don’t say significance, don’t use P-value thresholds” recommendation but, upon considering it with colleagues, they decided against it. Why not include, as part of the journal information shared with authors, a statement that the editors consider it important to retain a variety of statistical methodologies (correctly used) and have explicitly rejected calls to ban any of them, even calls that come on official association letterhead?

[2] WordPress has just sprung a radical change on bloggers, and as I haven’t figured it out yet, and my blog assistant is unavailable, I’ve cut this post short.

Categories: Error Statistics

Reminder: March 25 “How Should Applied Science Journal Editors Deal With Statistical Controversies?” (Mark Burgman)

The seventh meeting of our Phil Stat Forum*:

The Statistics Wars
and Their Casualties

25 March, 2021

TIME: 15:00-16:45 (London); 11:00-12:45 (New York, NOTE TIME CHANGE TO MATCH UK TIME**)

For information about the Phil Stat Wars forum and how to join, click on this link.

How should applied science journal editors deal with statistical controversies?

Mark Burgman

Mark Burgman is the Director of the Centre for Environmental Policy at Imperial College London, where he holds the Chair in Risk Analysis & Environmental Policy, and is Editor-in-Chief of the journal Conservation Biology. Previously, he was Adrienne Clarke Chair of Botany at the University of Melbourne, Australia. He works on expert judgement, ecological modelling, conservation biology and risk assessment. He has written models for biosecurity, medicine regulation, marine fisheries, forestry, irrigation, electrical power utilities, mining, and national park planning. He received a BSc from the University of New South Wales (1974), an MSc from Macquarie University, Sydney (1981), and a PhD from the State University of New York at Stony Brook (1987). He worked as a consultant ecologist and research scientist in Australia, the United States and Switzerland during the 1980s before joining the University of Melbourne in 1990. He joined CEP in February 2017. He has published over two hundred and fifty refereed papers and book chapters and seven authored books. He was elected to the Australian Academy of Science in 2006.

Abstract: Applied sciences come with different focuses. In environmental science, as in epidemiology, problems are often framed in a context of crisis. Decisions are imminent, data and understanding are incomplete, and the ramifications of decisions are substantial. This context makes the implications of inferences from data especially poignant. It also makes the claims made by fervent and dedicated authors especially challenging. The full gamut of potential statistical foibles and psychological frailties is on display. In this presentation, I will outline and summarise the kinds of errors of reasoning that are especially prevalent in ecology and conservation biology. I will describe how these things appear to be changing, providing some recent examples. Finally, I will describe some implications of alternative editorial policies.

Some questions:

*Would it be a good thing to dispense with p-values, either through encouragement or through strict editorial policy?

*Would it be a good thing to insist on confidence intervals?

*Should editors of journals in a broad discipline band together and post common editorial policies for statistical inference?

*Should all papers be reviewed by a professional statistician?

If so, which kind?


Readings/Slides:

Professor Burgman is developing this topic anew, so we don’t have the usual background reading. However, we do have his slides:

*Mark Burgman’s Draft Slides:  “How should applied science journal editors deal with statistical controversies?” (pdf)

*D. Mayo’s Slides: “The Statistics Wars and Their Casualties for Journal Editors: Intellectual Conflicts of Interest: Questions for Burgman” (pdf)

*A paper of mine from the Joint Statistical Meetings, “Rejecting Statistical Significance Tests: Defanging the Arguments”, discusses an episode that is relevant for the general topic of how journal editors should deal with statistical controversies.


Video Links: 

Mark Burgman’s presentation:

D. Mayo’s Casualties:

Please feel free to continue the discussion by posting questions or thoughts in the comments section on this PhilStatWars post.


*Meeting 15 of the general Phil Stat series, which began with the LSE Seminar PH500 on May 21.

**The UK doesn’t change its clocks until March 28.

Categories: ASA Guide to P-values, confidence intervals and tests, P-values, significance tests

Pandemic Nostalgia: The Corona Princess: Learning from a petri dish cruise (reblog 1yr)


Last week, giving a long-postponed talk for the NY/NJ Metro Area Philosophers of Science Group (MAPS), I mentioned how my book Statistical Inference as Severe Testing: How to Get Beyond the Statistics Wars (2018, CUP) invites the reader to see themselves on a special interest cruise as we revisit old and new controversies in the philosophy of statistics, noting that I had no idea, in writing the book, that cruise ships would themselves become controversial in just a few years. The first thing I wrote during the early pandemic days last March was this post on the Diamond Princess. The statistics gleaned from the ship remain important resources and have not been far off in many ways. I reblog it here.

Categories: covid-19, memory lane

March 25 “How Should Applied Science Journal Editors Deal With Statistical Controversies?” (Mark Burgman)

The seventh meeting of our Phil Stat Forum*:

The Statistics Wars
and Their Casualties

25 March, 2021

TIME: 15:00-16:45 (London); 11:00-12:45 (New York, NOTE TIME CHANGE)

For information about the Phil Stat Wars forum and how to join, click on this link.

How should applied science journal editors deal with statistical controversies?

Mark Burgman

Categories: ASA Guide to P-values, confidence intervals and tests, P-values, significance tests

Falsifying claims of trust in bat coronavirus research: mysteries of the mine (i)-(iv)


Have you ever wondered whether people read Master’s (or even Ph.D.) theses a decade on? Whether or not you have, I think you will be intrigued to learn the story of why an obscure Master’s thesis from 2012, translated from Chinese in 2020, is now a key to unravelling the global controversy about the mechanism and origins of Covid-19. The thesis, by a doctor, Li Xu [1], “The Analysis of 6 Patients with Severe Pneumonia Caused by Unknown Viruses”, describes six patients he helped to treat after they entered a hospital in 2012, one after the other, suffering from an atypical pneumonia contracted while cleaning up after bats in an abandoned copper mine in China. Given the keen interest in finding the origin of the 2002–2003 severe acute respiratory syndrome (SARS) outbreak, Li wrote: “This makes the research of the bats in the mine where the six miners worked and later suffered from severe pneumonia caused by unknown virus a significant research topic”. He and the other doctors treating the mine cleaners hypothesized that their disease was caused by a SARS-like coronavirus, contracted through close proximity to the bats in the mine.

Categories: covid-19, falsification, science communication
