Wasserstein et al 2019

Don’t divorce statistical inference from “statistical thinking”: some exchanges

 


A topic that came up in some recent comments reflects a growing tendency to divorce statistical inference (bad) from statistical thinking (good), and it deserves the spotlight of a post. I always alert authors of papers that come up on this blog, inviting them to comment, and one such response, from Christopher Tong (reacting to a comment on Ron Kenett), concerns this dichotomy.

Response by Christopher Tong to D. Mayo’s July 14 comment

TONG: In responding to Prof. Kenett, Prof. Mayo states: “we should reject the supposed dichotomy between ‘statistical method and statistical thinking’ which unfortunately gives rise to such titles as ‘Statistical inference enables bad science, statistical thinking enables good science,’ in the special TAS 2019 issue. This is nonsense.” [Mayo July 14 comment here.] Continue reading

Categories: statistical inference vs statistical thinking, statistical significance tests, Wasserstein et al 2019 | 11 Comments

Andrew Gelman (Guest post): (Trying to) clear up a misunderstanding about decision analysis and significance testing


Professor Andrew Gelman
Higgins Professor of Statistics
Professor of Political Science
Director of the Applied Statistics Center
Columbia University

 

(Trying to) clear up a misunderstanding about decision analysis and significance testing

Background

In our 2019 article, Abandon Statistical Significance, Blake McShane, David Gal, Christian Robert, Jennifer Tackett, and I talk about three scenarios: summarizing research, scientific publication, and decision making.

In making our recommendations, we’re not saying it will be easy; we’re just saying that screening based on statistical significance has lots of problems. P-values and related measures are not useless—there can be value in saying that an estimate is only 1 standard error away from 0 and so it is consistent with the null hypothesis, or that an estimate is 10 standard errors from zero and so the null can be rejected, or that an estimate is 2 standard errors from zero, which is something that we would not usually see if the null hypothesis were true. Comparison to a null model can be a useful statistical tool, in its place. The problem we see with “statistical significance” is when this tool is used as a dominant or default or master paradigm: Continue reading
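The arithmetic behind Gelman’s three examples is easy to check. A minimal sketch (not from the article itself, and assuming a normal sampling distribution for the estimate) converts each “number of standard errors from zero” into a two-sided p-value:

```python
from math import erfc, sqrt

def two_sided_p(z: float) -> float:
    """Two-sided p-value for a z-statistic under a standard normal null.

    erfc is used instead of 1 - erf to keep precision for large |z|.
    """
    return erfc(abs(z) / sqrt(2))

# The three cases mentioned in the excerpt: 1, 2, and 10 SEs from zero.
for z in (1, 2, 10):
    print(f"estimate {z} SE from 0: two-sided p = {two_sided_p(z):.3g}")
```

An estimate 1 SE from zero gives p ≈ 0.32 (consistent with the null), 2 SEs gives p ≈ 0.046 (unusual if the null were true), and 10 SEs gives a p-value so small the null is rejected by any conventional threshold—which is the point of the passage: the comparison itself is fine; treating the 0.05 cutoff as a master screening rule is the problem.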

Categories: abandon statistical significance, gelman, statistical significance tests, Wasserstein et al 2019 | 29 Comments

Guest Post: Ron Kenett: What’s happening in statistical practice since the “abandon statistical significance” call


Ron S. Kenett
Chairman of the KPA Group;
Senior Research Fellow, the Samuel Neaman Institute, Technion, Haifa;
Chairman, Data Science Society, Israel

 

What’s happening in statistical practice since the “abandon statistical significance” call

This is a retrospective view from experience gained by applying statistics to a wide range of problems, with an emphasis on the past few years. The post is kept at a general level in order to provide a bird’s eye view of the points being made. Continue reading

Categories: abandon statistical significance, Wasserstein et al 2019 | 26 Comments

Guest Post (part 2 of 2): Daniël Lakens: “How were we supposed to move beyond p < .05, and why didn’t we?”


Professor Daniël Lakens
Human Technology Interaction
Eindhoven University of Technology

[Some earlier posts by D. Lakens on this topic are at the end of this post]*

This continues Part 1:

4: Most do not offer any alternative at all

At this point, it might be worthwhile to point out that most of the contributions to the special issue do not discuss alternative approaches to p < .05 at all. They discuss general problems with low quality research (Kmetz, 2019), the importance of improving quality control (D. W. Hubbard & Carriquiry, 2019), results-blind reviewing (Locascio, 2019), or the role of subjective judgment (Brownstein et al., 2019). There are historical perspectives on how we got to this point (Kennedy-Shaffer, 2019), and ideas about how science should work instead, many stressing the importance of replication studies (R. Hubbard et al., 2019; Tong, 2019). Note that Trafimow recommends replication as an alternative (Trafimow, 2019) but also co-authors a paper stating we should not expect findings to replicate (Amrhein et al., 2019), thereby directly contradicting himself within the same special issue. Others propose giving up not simply on p-values, but on generalizable knowledge (Amrhein et al., 2019); the suggestion is to report only descriptive statistics. Continue reading

Categories: abandon statistical significance, D. Lakens, Wasserstein et al 2019 | 13 Comments
