statistical significance tests

My BJPS paper: Severe Testing: Error Statistics versus Bayes Factor Tests


In my new paper, “Severe Testing: Error Statistics versus Bayes Factor Tests”, now out online at The British Journal for the Philosophy of Science, I “propose that commonly used Bayes factor tests be supplemented with a post-data severity concept in the frequentist error statistical sense”. But how? I invite your thoughts on this and any aspect of the paper.* (You can read it here.)

I’m pasting the abstract and the introduction below. Continue reading

Categories: Bayesian/frequentist, Likelihood Principle, multiple testing | 4 Comments

Response to Ben Recht’s post (“What is Statistics’ Purpose?”) on my Neyman seminar (ii)


There was a very valuable panel discussion after my October 9 Neyman Seminar in the Statistics Department at UC Berkeley. I want to respond to many of the questions put forward by the participants (Ben Recht, Philip Stark, Bin Yu, Snow Zhang) that we did not address during that panel. Slides from my presentation, “Severity as a basic concept of philosophy of statistics”, are at the end of this post (but with none of the animations). I begin in this post by responding to Ben Recht, a professor of Artificial Intelligence and Computer Science at Berkeley, and his recent blogpost on my talk, What is Statistics’ Purpose? On severe testing, regulation, and butter passing. I will consider: (1) a complex or leading question; (2) why I chose to focus on Neyman’s philosophy of statistics; and (3) what the “100 years of fighting and browbeating” were/are all about. Continue reading

Categories: affirming the consequent, Ben Recht, Neyman, P-values, Severity, statistical significance tests, statistics wars | 10 Comments

Don’t divorce statistical inference from “statistical thinking”: some exchanges

 


A topic that came up in some recent comments reflects a tendency to divorce statistical inference (bad) from statistical thinking (good), and it deserves the spotlight of a post. I always alert authors of papers that come up on this blog, inviting them to comment, and one such response, from Christopher Tong (reacting to a comment on Ron Kenett), concerns this dichotomy.

Response by Christopher Tong to D. Mayo’s July 14 comment

TONG: In responding to Prof. Kenett, Prof. Mayo states: “we should reject the supposed dichotomy between ‘statistical method and statistical thinking’ which unfortunately gives rise to such titles as ‘Statistical inference enables bad science, statistical thinking enables good science,’ in the special TAS 2019 issue. This is nonsense.” [Mayo July 14 comment here.] Continue reading

Categories: statistical inference vs statistical thinking, statistical significance tests, Wasserstein et al 2019 | 11 Comments

Andrew Gelman (Guest post): (Trying to) clear up a misunderstanding about decision analysis and significance testing


Professor Andrew Gelman
Higgins Professor of Statistics
Professor of Political Science
Director of the Applied Statistics Center
Columbia University

 

(Trying to) clear up a misunderstanding about decision analysis and significance testing

Background

In our 2019 article, Abandon Statistical Significance, Blake McShane, David Gal, Christian Robert, Jennifer Tackett, and I talk about three scenarios: summarizing research, scientific publication, and decision making.

In making our recommendations, we’re not saying it will be easy; we’re just saying that screening based on statistical significance has lots of problems. P-values and related measures are not useless—there can be value in saying that an estimate is only 1 standard error away from 0 and so it is consistent with the null hypothesis, or that an estimate is 10 standard errors from zero and so the null can be rejected, or that an estimate is 2 standard errors from zero, which is something that we would not usually see if the null hypothesis were true. Comparison to a null model can be a useful statistical tool, in its place. The problem we see with “statistical significance” is when this tool is used as a dominant or default or master paradigm: Continue reading
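To make the arithmetic behind these standard-error comparisons concrete, here is a minimal sketch of my own (not from Gelman's post) that converts an estimate's distance from zero, measured in standard errors, into a two-sided P-value under a normal approximation, using the 1, 2, and 10 standard-error cases mentioned above.

```python
from scipy.stats import norm

def two_sided_p(z):
    """Two-sided P-value for an estimate that is z standard errors from zero,
    under a normal approximation to the sampling distribution."""
    return 2 * norm.sf(abs(z))

for z in (1, 2, 10):
    print(f"{z:>2} SE from zero -> P = {two_sided_p(z):.3g}")
# Roughly 0.32 for 1 SE (consistent with the null), 0.046 for 2 SE,
# and astronomically small for 10 SE (the null can be rejected).
```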

Categories: abandon statistical significance, gelman, statistical significance tests, Wasserstein et al 2019 | 29 Comments

2-4 yr review: Commentaries on my Editorial: several are published

I’m reblogging reader commentaries on my editorial, “The statistics wars and intellectual conflicts of interest“. Three are published in Conservation Biology; a fourth, by Lakens, is in the Journal of the International Society of Physiotherapy. This post was first published on May 15, 2022; thus, “soon to be” refers to the past. Share your remarks in the comments. Continue reading

Categories: 4 years ago!, statistical significance tests, The Statistics Wars and Their Casualties | Leave a comment

5-year review: “Les stats, c’est moi”: We take that step here! (Adopt our fav word or phil stat!)(iii)

 

les stats, c’est moi

This is the last of the selected posts I will reblog from 5 years ago on the 2019 statistical significance controversy. The original post, published on this blog on December 13, 2019, had 85 comments, so you might find them of interest.  I invite readers to share their thoughts as to where the field is now, in relation to that episode, and to alternatives being used as replacements for statistical significance tests. Use the comments and send me guest posts.  Continue reading

Categories: 5-year memory lane, Error Statistics, statistical significance tests | Leave a comment

5-year Review: The ASA’s P-value Project: Why it’s Doing More Harm than Good (cont from 11/4/19)


I continue my selective 5-year review of some of the posts revolving around the statistical significance test controversy from 2019. This post was first published on the blog on November 14, 2019. I feared then that many of the howlers of statistical significance tests would be further etched in granite after the ASA’s P-value project, and in many quarters this is, unfortunately, true. One that I’ve noticed quite a lot is the (false) supposition that negative results are uninformative. Some fields, notably psychology, keep to a version of simple Fisherian tests, ignoring Neyman-Pearson (N-P) tests (never minding that Jacob Cohen was a psychologist who gave us “power analysis”). (See note [1].) For N-P, “it is immaterial which of the two alternatives…is labelled the hypothesis tested” (Neyman 1950, 259). Failing to find evidence of a genuine effect, coupled with the test’s having high capability to detect meaningful effects, warrants inferring the absence of meaningful effects. Even with the simple Fisherian test, failing to reject H0 is informative. Null results figure importantly throughout science, as with the Michelson-Morley experiment’s null result on the ether, and in directing attention away from unproductive theory development.
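To illustrate the reasoning about informative negative results, here is a small sketch of my own (the sample size, σ, significance level, and "meaningful effect" are stand-in assumptions, not from the post): with high power against a meaningful alternative, failing to reject gives grounds for inferring that any true effect is smaller than that alternative.

```python
from math import sqrt
from scipy.stats import norm

# One-sided test of H0: mu <= 0 vs H1: mu > 0, known sigma, alpha = 0.05.
# Illustrative numbers only; sigma, n, and the 'meaningful' effect are assumptions.
sigma, n, alpha = 1.0, 100, 0.05
se = sigma / sqrt(n)
cutoff = norm.ppf(1 - alpha) * se          # reject H0 if the sample mean exceeds this

def power(mu_alt):
    """Probability the test rejects H0 when the true mean is mu_alt."""
    return norm.sf((cutoff - mu_alt) / se)

meaningful = 0.3                           # a 'meaningful' effect size, for illustration
print(f"Power against mu = {meaningful}: {power(meaningful):.3f}")
# Power is about 0.91 here, so failing to reject is good evidence that any
# true effect is smaller than 0.3: the negative result is informative.
```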

Please share your comments on this blogpost. Continue reading

Categories: 5-year memory lane, statistical significance tests, straw person fallacy | 1 Comment

5-year Review: P-Value Statements and Their Unintended(?) Consequences: The June 2019 ASA President’s Corner (b)

I continue my 5-year review of some highlights from the “abandon significance” movement of 2019. This post was first published on this blog on November 30, 2019. It was based on a call by then American Statistical Association President Karen Kafadar, which sparked a counter-movement. I will soon begin sharing a few invited guest posts reflecting on current thinking, either on the episode or on statistical methodology more generally. I may continue to post such reflections over the summer, as they come in, so let me know if you’d like to contribute something. Share your thoughts in the comments.


Mayo writing to Kafadar

I never met Karen Kafadar, the 2019 President of the American Statistical Association (ASA), but the other day I wrote to her in response to a call in her extremely interesting June 2019 President’s Corner: “Statistics and Unintended Consequences“:

  • “I welcome your suggestions for how we can communicate the importance of statistical inference and the proper interpretation of p-values to our scientific partners and science journal editors in a way they will understand and appreciate and can use with confidence and comfort—before they change their policies and abandon statistics altogether.”

I only recently came across her call, and I will share my letter below. First, here are some excerpts from her June President’s Corner (her December report is due any day). Continue reading

Categories: 5-year memory lane, stat wars and their casualties, statistical significance tests | Leave a comment

The First 2023 Act of Stat Activist Watch: Statistics ‘for the people’

One of the central roles I proposed for “stat activists” (after our recent workshop, The Statistics Wars and Their Casualties) is to critically scrutinize mistaken claims about leading statistical methods–especially when such claims are put forward as permissible viewpoints to help “the people” assess methods in an unbiased manner. The first act of 2023 under this umbrella concerns an article put forward as “statistics for the people” in a journal of radiation oncology. We are talking here about recommendations for analyzing data for treating cancer! Though put forward as a fair-minded, or at least an informative, comparison of Bayesian vs frequentist methods, the article is, I find, little more than an advertisement for subjective Bayesian methods set against a caricature of frequentist error statistical methods. The journal’s “statistics for the people” section would benefit from a full-blown article on frequentist error statistical methods–not just the letter of ours they recently published–but I’m grateful to Chowdhry and the other colleagues who joined me in this effort. You will find our letter below, followed by the authors’ response. You can also find a link to their original “statistics for the people” article in the references. Let me admit right off that my criticisms are a bit stronger than my co-authors’. Continue reading

Categories: stat activist watch 2023, statistical significance tests | 2 Comments

D. Lakens responds to confidence interval crusading journal editors


In what began as a guest commentary on my 2021 editorial in Conservation Biology, Daniël Lakens recently published a response to a recommendation, by journal editors from the International Society of Physiotherapy Journal, against using null hypothesis significance tests. Here are some excerpts from his full article, the replies (‘response to Lakens‘), links, and a few comments of my own. Continue reading

Categories: stat wars and their casualties, statistical significance tests | 12 Comments

Join me in reforming the “reformers” of statistical significance tests


The most surprising discovery about today’s statistics wars is that some who hang out shingles as “statistical reformers” are themselves guilty of misdefining some of the basic concepts of error statistical tests—notably power. (See my recent post on power howlers.) A major purpose of my Statistical Inference as Severe Testing: How to Get Beyond the Statistics Wars (2018, CUP) is to clarify basic notions so as to get beyond what I call “chestnuts” and “howlers” of tests. The only way that the disputing tribes can get beyond the statistics wars is by (at least) correctly understanding the central concepts. But these misunderstandings are more common than ever, so I’m asking readers to help. Why are they more common (than before the “new reformers” of the last decade)? I suspect that at least one reason is the popularity of Bayesian variants on tests: if one is looking for posterior probabilities of hypotheses, then error statistical ingredients may tend to look as if that’s what they supply. Continue reading

Categories: power, SIST, statistical significance tests | Tags: , , | 2 Comments

Sir David Cox: Significance tests: rethinking the controversy (September 5, 2018 RSS keynote)

Sir David Cox speaking at the RSS meeting in a session: “Significance Tests: Rethinking the Controversy” on 5 September 2018.

Continue reading

Categories: Sir David Cox, statistical significance tests | Tags: | Leave a comment

John Park: Poisoned Priors: Will You Drink from This Well?(Guest Post)


John Park, MD
Radiation Oncologist
Kansas City VA Medical Center

Poisoned Priors: Will You Drink from This Well?

As an oncologist specializing in the field of radiation oncology, I find “The Statistics Wars and Intellectual Conflicts of Interest”, as Prof. Mayo’s recent editorial is titled, to be of practical importance to me and my patients (Mayo, 2021). Some are flirting with Bayesian statistics as a way to move on from statistical significance testing and the use of P-values. In fact, what many consider the world’s preeminent cancer center, MD Anderson, has a strong Bayesian group that completed two early-phase Bayesian studies in radiation oncology, published in the most prestigious cancer journal, The Journal of Clinical Oncology (Liao et al., 2018 and Lin et al., 2020). This raises the hotly contested issue of subjective priors, and much ado has been written about the ability to overcome this problem. Specifically in medicine, one thinks of Spiegelhalter’s classic 1994 paper, which discusses reference, clinical, skeptical, and enthusiastic priors and uses an example from radiation oncology to make its case (Spiegelhalter et al., 1994). This is all well and good in theory, but what if there is ample evidence that the subject matter experts have major conflicts of interest (COIs) and biases, so that their priors cannot be trusted? A debate raging in oncology is whether non-invasive radiation therapy is as good as invasive surgery for early-stage lung cancer patients. This is not a trivial question, as postoperative morbidity from surgery can range from 19–50% and 90-day mortality anywhere from 0–5% (Chang et al., 2021). Radiation therapy is highly attractive, as there are numerous reports hinting at equal efficacy with far less morbidity. Unfortunately, four major clinical trials were unable to accrue patients for this important question. Why could they not enroll patients, you ask? Long story short: if a patient is referred to radiation oncology and treated with radiation, the surgeon loses out on the revenue, and vice versa. Dr. David Jones, a surgeon at Memorial Sloan Kettering, notes there was no “equipoise among enrolling investigators and medical specialties… Although the reasons are multiple… I believe the primary reason is financial” (Jones, 2015). I am not skirting responsibility for my field’s biases. Dr. Hanbo Chen, a radiation oncologist, notes in his meta-analysis of multiple publications comparing surgery vs radiation that overall survival was associated with the specialty of the first author who published the article (Chen et al., 2018). Perhaps the pen is mightier than the scalpel! Continue reading

Categories: ASA Task Force on Significance and Replicability, Bayesian priors, PhilStat/Med, statistical significance tests | Tags: | 4 Comments

Should Bayesian Clinical Trialists Wear Error Statistical Hats? (i)

 

I. A principled disagreement

The other day I was in a practice (Zoom) session for a panel I’m on, about how different approaches and philosophies (Frequentist, Bayesian, machine learning) might explain “why we disagree” when interpreting clinical trial data. The focus is radiation oncology.[1] An important point of disagreement between frequentists (error statisticians) and Bayesians concerns whether, and if so how, to modify inferences in the face of a variety of selection effects, multiple testing, and stopping for interim analysis. Such multiplicities directly alter the capabilities of methods to avoid erroneously interpreting data, so the frequentist error probabilities are altered. By contrast, if an account conditions on the observed data, error probabilities drop out, and we get principles such as the stopping rule principle. My presentation included a quote from Bayarri and J. Berger (2004): Continue reading
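As a rough illustration of how such multiplicities alter error probabilities (my own simulation sketch, not from the post or from Bayarri and Berger; the trial size and number of looks are arbitrary assumptions): if one takes repeated interim looks at accumulating data and stops as soon as a nominal 0.05 test rejects, the actual type I error probability climbs well above 0.05.

```python
import numpy as np

rng = np.random.default_rng(0)

def rejects_with_interim_looks(n_max=100, looks=10):
    """Simulate one trial under H0 (true mean 0, sigma 1) with equally spaced
    interim analyses; return True if any look gives a nominally significant z-test."""
    data = rng.standard_normal(n_max)
    crit = 1.96                              # two-sided nominal 0.05 cutoff at each look
    for k in range(1, looks + 1):
        n = k * n_max // looks
        z = data[:n].mean() * np.sqrt(n)     # z-statistic with known sigma = 1
        if abs(z) > crit:
            return True
    return False

sims = 20_000
rate = sum(rejects_with_interim_looks() for _ in range(sims)) / sims
print(f"Actual type I error with 10 interim looks: {rate:.3f}")  # roughly 0.19, not 0.05
```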

Categories: multiple testing, statistical significance tests, strong likelihood principle | 26 Comments

Invitation to discuss the ASA Task Force on Statistical Significance and Replication

.

The latest salvo in the statistics wars comes in the form of the published statement of the ASA Task Force on Statistical Significance and Replicability, appointed by past ASA president Karen Kafadar in November/December 2019. (In the ‘before times’!) Its members are:

Linda Young (Co-Chair), Xuming He (Co-Chair), Yoav Benjamini, Dick De Veaux, Bradley Efron, Scott Evans, Mark Glickman, Barry Graubard, Xiao-Li Meng, Vijay Nair, Nancy Reid, Stephen Stigler, Stephen Vardeman, Chris Wikle, Tommy Wright, and Karen Kafadar (Ex-officio). (Kafadar 2020)

The full report of this Task Force is in The Annals of Applied Statistics, and in my blogpost. It begins:

In 2019 the President of the American Statistical Association (ASA) established a task force to address concerns that a 2019 editorial in The American Statistician (an ASA journal) might be mistakenly interpreted as official ASA policy. (The 2019 editorial recommended eliminating the use of “p < 0.05” and “statistically significant” in statistical analysis.) This document is the statement of the task force… (Benjamini et al. 2021)

Continue reading

Categories: 2016 ASA Statement on P-values, ASA Task Force on Significance and Replicability, JSM 2020, National Institute of Statistical Sciences (NISS), statistical significance tests | 3 Comments

Why hasn’t the ASA Board revealed the recommendations of its new task force on statistical significance and replicability?

something’s not revealed

A little over a year ago, the board of the American Statistical Association (ASA) appointed a new Task Force on Statistical Significance and Replicability (under then president Karen Kafadar) to provide it with recommendations. [Its members are here (i).] You might remember my blogpost at the time, “Les Stats C’est Moi”. The Task Force worked quickly, despite the pandemic, giving its recommendations to the ASA Board early, in time for the Joint Statistical Meetings at the end of July 2020. But the ASA hasn’t revealed the Task Force’s recommendations, and I just learned yesterday that it has no plans to do so*. A panel session I was in at the JSM (P-values and ‘Statistical Significance’: Deconstructing the Arguments) grew out of this episode, and papers from the proceedings are now out. The introduction to my contribution gives you the background to my question, while revealing one of the recommendations (I only know of 2). Continue reading

Categories: 2016 ASA Statement on P-values, JSM 2020, replication crisis, statistical significance tests, straw person fallacy | 8 Comments

The Statistics Debate! (NISS DEBATE, October 15, Noon – 2 pm ET)

October 15, Noon – 2 pm ET (Website)

Where do YOU stand?

Given the issues surrounding the misuse and abuse of p-values, do you think p-values should be used? Continue reading

Categories: Announcement, J. Berger, P-values, Philosophy of Statistics, reproducibility, statistical significance tests, Statistics | Tags: | 9 Comments

My paper, “P values on Trial” is out in Harvard Data Science Review


My new paper, “P Values on Trial: Selective Reporting of (Best Practice Guides Against) Selective Reporting”, is out in Harvard Data Science Review (HDSR). HDSR describes itself as “A Microscopic, Telescopic, and Kaleidoscopic View of Data Science”. The editor-in-chief is Xiao-Li Meng, a statistician at Harvard. He writes a short blurb on each article in his opening editorial of the issue. Continue reading

Categories: multiple testing, P-values, significance tests, Statistics | 29 Comments

On Some Self-Defeating Aspects of the ASA’s (2019) Recommendations on Statistical Significance Tests (ii)


“Before we stood on the edge of the precipice, now we have taken a great step forward”

 

What’s self-defeating about pursuing statistical reforms in the manner taken by the American Statistical Association (ASA) in 2019? In case you’re not up on the latest in the significance testing wars: the 2016 ASA Statement on P-Values and Statistical Significance, ASA I, was arguably a reasonably consensual statement on the need to avoid some well-known abuses of P-values–notably, if you compute P-values while ignoring selective reporting, multiple testing, or stopping when the data look good, the computed P-value will be invalid (Principle 4, ASA I). But then Ron Wasserstein, executive director of the ASA, and his co-editors decided they weren’t happy with their own 2016 statement because it “stopped just short of recommending that declarations of ‘statistical significance’ be abandoned” altogether. In their new statement–ASA II(note)–they announced: “We take that step here….Statistically significant –don’t say it and don’t use it”.
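A quick simulation sketch of mine (assuming 20 independent tests of true null hypotheses, purely for illustration) shows what Principle 4 is getting at: report only the smallest of many P-values as if it were a single pre-planned test, and nominally "significant" results turn up far more often than the 0.05 level suggests.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
m, sims, alpha = 20, 10_000, 0.05    # 20 independent tests, all null hypotheses true

z = rng.standard_normal((sims, m))   # z-statistics under the null
p = 2 * norm.sf(np.abs(z))           # two-sided P-values
min_p = p.min(axis=1)                # selective reporting: keep only the best-looking result

print(f"P(smallest P-value < {alpha}) = {(min_p < alpha).mean():.3f}")
# Roughly 0.64 (= 1 - 0.95**20), so a 'P < 0.05' obtained this way is no
# longer a valid error probability for the selectively reported hypothesis.
```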

Why do I say it is a mis-take to have taken the supposed next “great step forward”? Why do I count it as unsuccessful as a piece of statistical science policy? In what ways does it make the situation worse? Let me count the ways. The first is in this post. Others will come in following posts, until I become too disconsolate to continue.[i] Continue reading

Categories: P-values, stat wars and their casualties, statistical significance tests | 14 Comments
