It’s not too late to register for Sessions #3 and #4 of our online workshop. There will be 7 new (live) speakers and, for the first time ever, the (short) movie “The Recap of Recaps” will be shown at the start of Session #3. **Registration form**

# Announcement

## SCHEDULE: The Statistics Wars and Their Casualties: 1 Dec & 8 Dec: Sessions 3 & 4


**The Statistics Wars**

**and Their Casualties**

**1 December and 8 December 2022
Sessions #3 and #4**

**15:00-18:15 London time / 10:00am-1:15pm EST**

**ONLINE**

**(London School of Economics, CPNSS)**

**Registration form**

**For slides and videos of Sessions #1 and #2, see the workshop page**

**1 December**

**Session 3** (Moderator: Daniël Lakens, Eindhoven University of Technology)

**OPENING**

**“What Happened So Far”:** A medley (20 min) of recaps from Sessions 1 & 2: Deborah Mayo (Virginia Tech), Richard Morey (Cardiff), Stephen Senn (Edinburgh), Daniël Lakens (Eindhoven), Christian Hennig (Bologna) & Yoav Benjamini (Tel Aviv).

**SPEAKERS**

- **Daniele Fanelli** (London School of Economics and Political Science), *The neglected importance of complexity in statistics and metascience* (Abstract)
- **Stephan Guttinger** (University of Exeter), *What are questionable research practices?* (Abstract)
- **David J. Hand** (Imperial College, London), *What’s the question?* (Abstract)

**DISCUSSIONS**:

- Closing Panel:
**“Where Should Stat Activists Go From Here (Part i)?”**: Yoav Benjamini, Daniele Fanelli, Stephan Guttinger, David Hand, Christian Hennig, Daniël Lakens, Deborah Mayo, Richard Morey, Stephen Senn

**8 December**

**Session 4** (Moderator: Deborah Mayo, Virginia Tech)

**SPEAKERS**

- **Jon Williamson** (University of Kent), *Causal inference is not statistical inference* (Abstract)
- **Margherita Harris** (London School of Economics and Political Science), *On Severity, the Weight of Evidence, and the Relationship Between the Two* (Abstract)
- **Aris Spanos** (Virginia Tech), *Revisiting the Two Cultures in Statistical Modeling and Inference as They Relate to the Statistics Wars and Their Potential Casualties* (Abstract)
- **Uri Simonsohn** (Esade Ramon Llull University), *Mathematically Elegant Answers to Research Questions No One is Asking (meta-analysis, random effects models, and Bayes factors)* (Abstract)

**DISCUSSIONS**:

- Closing Panel:
**“Where Should Stat Activists Go From Here (Part ii)?”**: Workshop Participants: Yoav Benjamini, Alexander Bird, Mark Burgman, Daniele Fanelli, Stephan Guttinger, David Hand, Margherita Harris, Christian Hennig, Daniël Lakens, Deborah Mayo, Richard Morey, Stephen Senn, Uri Simonsohn, Aris Spanos, Jon Williamson

**********************************************************************

**DESCRIPTION:** While the field of statistics has a long history of passionate foundational controversy, the last decade has, in many ways, been the most dramatic. Misuses of statistics, biasing selection effects, and high-powered methods of big-data analysis have made it easy to find impressive-looking but spurious results that fail to replicate. As the crisis of replication has spread beyond psychology and the social sciences to biomedicine, genomics, machine learning, and other fields, the need for critical appraisal of proposed reforms is growing. Many are welcome (transparency about data, eschewing mechanical uses of statistics); some are quite radical. The experts do not agree on the best ways to promote trustworthy results, and these disagreements often reflect philosophical battles, old and new, about the nature of inductive-statistical inference and the roles of probability in statistical inference and modeling. Intermingled in the controversies about evidence are competing social, political, and economic values. If statistical consumers are unaware of the assumptions behind rival evidence-policy reforms, they cannot scrutinize the consequences that affect them. What is at stake is a critical standpoint that we may increasingly be in danger of losing. Critically reflecting on proposed reforms and changing standards requires insights from statisticians, philosophers of science, psychologists, journal editors, economists, and practitioners across the natural and social sciences. This workshop will bring together these interdisciplinary insights, from speakers as well as attendees.

**Speakers/Panellists:**

**Yoav Benjamini** (Tel Aviv University), **Alexander Bird** (University of Cambridge), **Mark Burgman** (Imperial College London), **Daniele Fanelli** (London School of Economics and Political Science), **Roman Frigg** (London School of Economics and Political Science), **Stephan Guttinger** (University of Exeter), **David Hand** (Imperial College London), **Margherita Harris** (London School of Economics and Political Science), **Christian Hennig** (University of Bologna), **Daniël Lakens** (Eindhoven University of Technology), **Deborah Mayo** (Virginia Tech), **Richard Morey** (Cardiff University), **Stephen Senn** (Edinburgh, Scotland), **Uri Simonsohn** (Esade Ramon Llull University), **Aris Spanos** (Virginia Tech), **Jon Williamson** (University of Kent)

**Sponsors/Affiliations:**

- The Foundation for the Study of Experimental Reasoning, Reliability, and the Objectivity and Rationality of Science (E.R.R.O.R.S.); Centre for Philosophy of Natural and Social Science (CPNSS), London School of Economics; Virginia Tech Department of Philosophy
**Organizers**: D. Mayo, R. Frigg, and M. Harris

**Logistician** (chief logistics and contact person): Jean Miller

**Executive Planning Committee:** Y. Benjamini, D. Hand, D. Lakens, S. Senn

## Multiplicity, Data-Dredging, and Error Control Symposium at PSA 2022: Mayo, Thornton, Glymour, Mayo-Wilson, Berger

Some claim that no one attends Sunday morning (9am) sessions at the Philosophy of Science Association. But if you’re attending the PSA (in Pittsburgh), we hope you’ll falsify this supposition and come to hear us (Mayo, Thornton, Glymour, Mayo-Wilson, Berger) wrestle with some rival views on the trenchant problems of multiplicity, data-dredging, and error control. *Coffee and donuts to all who show up.*

*Multiplicity, Data-Dredging, and Error Control*

**November 13, 9:00 – 11:45 AM
(link to symposium on PSA website)**

**Speakers:** Continue reading

## Upcoming Workshop: The Statistics Wars and Their Casualties

**The Statistics Wars**

**and Their Casualties**

**22-23 September 2022
15:00-18:00 London time***

**ONLINE**

**(London School of Economics, CPNSS)**

**To register for the workshop,
please fill out the registration form here.**

**For schedules and updated details, please see the workshop webpage: phil-stat-wars.com.**

***These will be Sessions 1 & 2; two more online
sessions (3 & 4) will follow on December 1 & 8.**

## Free access to “Statistical Inference as Severe Testing: How to Get Beyond the Stat Wars” (CUP, 2018) for 1 more week

Thanks to CUP, the electronic version of my book, *Statistical Inference as Severe Testing: How to Get Beyond the Statistics Wars (2018)*, is available for free for one more week (through August 31) at this link: https://www.cambridge.org/core/books/statistical-inference-as-severe-testing/D9DF409EF568090F3F60407FF2B973B2. Blurbs of the 16 tours in the book may be found here: blurbs of the 16 tours.

## Read It Free: “Stat Inference as Severe Testing: How to Get Beyond the Stat Wars” during August

CUP will make the electronic version of my book, *Statistical Inference as Severe Testing: How to Get Beyond the Statistics Wars (2018)*, available to access for free from August 1-31 at this link: https://www.cambridge.org/core/books/statistical-inference-as-severe-testing/D9DF409EF568090F3F60407FF2B973B2. However, they will confirm the link closer to August, so check this blog on Aug 1 for any update if you’re interested. (July 31: the link works!) (August 5: the link is working. Let me know if you have problems getting in.) Blurbs of the 16 tours in the book may be found here: blurbs of the 16 tours.

Here’s a CUP interview from when the book first came out.

## The Statistics Wars and Their Casualties Workshop: Now Online

**The Statistics Wars**

**and Their Casualties**

**22-23 September 2022
15:00-18:00 London time***

**ONLINE**

**To register for the workshop, please fill out the registration form here.**

***These will be Sessions 1 & 2; two more online sessions (3 & 4) will follow at 15:00-18:00 London time on December 1 & 8.**

**Yoav Benjamini** (Tel Aviv University), **Alexander Bird** (University of Cambridge), **Mark Burgman** (Imperial College London), **Daniele Fanelli** (London School of Economics and Political Science), **Roman Frigg** (London School of Economics and Political Science), **Stephan Guttinger** (University of Exeter), **David Hand** (Imperial College London), **Margherita Harris** (London School of Economics and Political Science), **Christian Hennig** (University of Bologna), **Daniël Lakens** (Eindhoven University of Technology), **Deborah Mayo** (Virginia Tech), **Richard Morey** (Cardiff University), **Stephen Senn** (Edinburgh, Scotland), **Jon Williamson** (University of Kent) Continue reading

## Philosophy of socially aware data science conference

I’ll be speaking at this conference in Philly tomorrow. My slides are also below.

**PDF of my slides:** Statistical “Reforms”: Fixing Science or Threats to Replication and Falsification. Continue reading

## Philosophy of Science Association (PSA) 22 Call for Contributed Papers

## PSA2022: Call for Contributed Papers

Twenty-Eighth Biennial Meeting of the Philosophy of Science Association

November 10 – November 13, 2022

Pittsburgh, Pennsylvania

**Submissions open on March 9, 2022 for contributed papers to be presented at the PSA2022 meeting in Pittsburgh, Pennsylvania, on November 10-13, 2022. The deadline for submitting a paper is 11:59 PM Pacific Standard Time on April 6, 2022.**

Contributed papers may be on any topic in the philosophy of science. The PSA2022 Program Committee is committed to assembling a program with high-quality papers on a variety of topics and diverse presenters that reflects the full range of current work in the philosophy of science. Continue reading

## “Should Science Abandon Statistical Significance?” Session at AAAS Annual Meeting, Feb 18

Karen Kafadar, Yoav Benjamini, and Donald Macnaughton will be in a session:

**Should Science Abandon Statistical Significance?**


Friday, Feb 18 from 2-2:45 PM (EST) at the AAAS 2022 annual meeting.

The **general program** is here. To **register**, go to this page.

**Synopsis**

The concept of statistical significance is central in scientific research. However, the concept is often poorly understood and thus is often unfairly criticized. This presentation includes three independent but overlapping arguments about the usefulness of the concept of statistical significance to reliably detect “effects” in frontline scientific research data. We illustrate the arguments with examples of scientific importance from genomics, physics, and medicine. We explain how the concept of statistical significance provides a cost-efficient objective way to empower scientific research with evidence.

**Papers** Continue reading

## ENBIS Webinar: Statistical Significance and p-values

**Yesterday’s event video recording is available at:**

https://www.youtube.com/watch?v=2mWYbcVflyE&t=10s

**European Network for Business and Industrial Statistics (ENBIS) Webinar:**

**Statistical Significance and p-values**

**Europe/Amsterdam (CET); 08:00-09:30 am (EST)**

**ENBIS will dedicate this webinar to the memory of Sir David Cox, who sadly passed away in January 2022.**

## January 11: Phil Stat Forum (remote): Statistical Significance Test Anxiety

*Special Session of the (remote)*

Phil Stat Forum:


**11 January 2022**

**“Statistical Significance Test Anxiety”**

**TIME: 15:00-17:00 (London, GMT); 10:00-12:00 (EST)**

**Presenters:** Deborah Mayo (Virginia Tech) &

Yoav Benjamini (Tel Aviv University)

**Moderator:** David Hand (Imperial College London)

## January 11: Phil Stat Forum (remote)

*Special Session of the (remote)*

Phil Stat Forum:


**11 January 2022**

**“Statistical Significance Test Anxiety”**

**TIME: 15:00-17:00 (London, GMT); 10:00-12:00 (EST)**

**Presenters:** Deborah Mayo (Virginia Tech) &

Yoav Benjamini (Tel Aviv University)

**Moderator:** David Hand (Imperial College London)

**Focus of the Session:**

## Our session is now remote: Philo of Sci Association (PSA): Philosophy IN Science (PinS): Can Philosophers of Science Contribute to Science?

*Philosophy in Science: Can Philosophers of Science Contribute to Science?*

on November 13, 2-4 pm


OUR SESSION HAS BECOME REMOTE: PLEASE JOIN US ON ZOOM! This session revolves around the intriguing question: Can philosophers of science contribute to science? They’re calling it philosophy “in” science: when philosophical ministrations actually intervene in a science itself. This is the session I’ll be speaking in. I hope you will come to our session; it’s now remote, so you can see it through a Zoom link. But I’d also like to hear what you think about this question, in the comments to this post. Continue reading

## CUNY zoom talk on Wednesday: Evidence as Passing a Severe Test

**If interested, write to me for the zoom link (error@vt.edu).**

## Why hasn’t the ASA Board revealed the recommendations of its new task force on statistical significance and replicability?

A little over a year ago, the board of the American Statistical Association (ASA) appointed a new Task Force on Statistical Significance and Replicability (under then-president Karen Kafadar) to provide it with recommendations. [Its members are here (i).] You might remember my blogpost at the time, “Les Stats C’est Moi”. The Task Force worked quickly, despite the pandemic, giving its recommendations to the ASA Board early, in time for the Joint Statistical Meetings at the end of July 2020. But the ASA hasn’t revealed the Task Force’s recommendations, and I just learned yesterday that it has no plans to do so*. A panel session I was in at the JSM (P-values and ‘Statistical Significance’: Deconstructing the Arguments) grew out of this episode, and papers from the proceedings are now out. The introduction to my contribution gives you the background to my question, while revealing one of the recommendations (I only know of 2). Continue reading

## The Statistics Debate (NISS) in Transcript Form

I constructed, together with Jean Miller, a transcript from the **October 15 Statistics Debate** (with me, J. Berger and D. Trafimow and moderator D. Jeske), sponsored by NISS. It’s so much easier to access the material this way rather than listening to it on the video. **Using this link**, you can see the words and hear the video at the same time, as well as pause and jump around. Below, I’ve pasted our responses to Question #1. Have fun and please share your comments.

**Dan Jeske: [QUESTION 1] Given the issues surrounding the misuses and abuse of p values, do you think they should continue to be used or not? Why or why not?**

**Deborah Mayo **03:46

Thank you so much. And thank you for inviting me, I’m very pleased to be here. Yes, I say we should continue to use p values and statistical significance tests. Uses of p values are really just a piece in a rich set of tools intended to assess and control the probabilities of misleading interpretations of data, i.e., error probabilities. They’re the first line of defense against being fooled by randomness, as Yoav Benjamini puts it. If even larger, or more extreme effects than you observed are frequently brought about by chance variability alone, i.e., p value not small, clearly you don’t have evidence of incompatibility with the mere chance hypothesis. It’s very straightforward reasoning. Even those who criticize p values, you’ll notice, will employ them, at least if they care to check the assumptions of their models. And this includes well-known Bayesians such as George Box, Andrew Gelman, and Jim Berger. Critics of p values often allege it’s too easy to obtain small p values. But notice the whole replication crisis is about how difficult it is to get small p values with preregistered hypotheses. This shows the problem isn’t p values, but those selection effects and data dredging. However, the same data-dredged hypothesis can occur in other methods, likelihood ratios, Bayes factors, Bayesian updating, except that now we lose the direct grounds to criticize inferences for flouting error statistical control. The introduction of prior probabilities, which may also be data dependent, offers further researcher flexibility. Those who reject p values are saying we should reject the method because it can be used badly. And that’s a bad argument. We should reject misuses of p values. But there’s a danger of blindly substituting alternative tools that throw out the error control baby with the bad statistics bathwater.
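Mayo’s "straightforward reasoning" can be made concrete with a small simulation (a hypothetical sketch; the numbers are illustrative and not from the debate): a p value is just the frequency with which chance variability alone would produce an effect at least as extreme as the one observed.

```python
import random
import statistics

random.seed(1)

# Hypothetical example: we observe a sample mean of 0.5 from n = 30
# measurements, and ask how often chance variability alone (a null
# model with true mean 0 and standard normal noise) produces a
# sample mean at least that extreme.
n = 30
observed_mean = 0.5

def simulated_null_mean() -> float:
    """Sample mean of n draws from the chance-only (null) model."""
    return statistics.fmean(random.gauss(0.0, 1.0) for _ in range(n))

reps = 20_000
hits = sum(abs(simulated_null_mean()) >= observed_mean for _ in range(reps))
p_value = hits / reps

print(f"simulated two-sided p value: {p_value:.4f}")
```

Here chance alone rarely matches the observed effect, so the simulated p value comes out small (the exact analytic value for these illustrative numbers is about 0.006); with a large p value, by Mayo’s reasoning, there would be no evidence of incompatibility with the mere chance hypothesis.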

**Dan Jeske **05:58

**Thank you, Deborah, Jim, would you like to comment on Deborah’s remarks and offer your own?**

**Jim Berger **06:06

Okay, yes. Well, I certainly agree with much of what Deborah said; after all, a p value is simply a statistic. And it’s an interesting statistic that does have many legitimate uses, when properly calibrated. And Deborah mentioned one such case, model checking, where Bayesians freely use some version of p values. On the other hand, if one interprets this question as, should they continue to be used in the same way that they’re used today? then my answer would be somewhat different. I think p values are commonly misinterpreted today, especially when they’re used to test a sharp null hypothesis. For instance, a p value of .05 is commonly interpreted by many as indicating the evidence is 20 to one in favor of the alternative hypothesis. And that just isn’t true. You can show, for instance, that if I’m testing a normal mean of zero versus nonzero, the odds of the alternative hypothesis to the null hypothesis can at most be seven to one. And that’s just a probabilistic fact; it doesn’t involve priors or anything. It’s an answer covering all possible alternatives. And so that 20 to one cannot be: if a p value of .05 is interpreted as 20-to-one odds, it’s just being interpreted wrongly, and the wrong conclusions are being reached. I’m reminded of an interesting paper, published some time ago now, reporting on a survey designed to determine whether clinical practitioners understood what a p value was. The results of the survey were published and were not surprising: most clinical practitioners interpreted a p value of .05 as something like 20-to-one odds against the null hypothesis, which again is incorrect. The fascinating aspect of the paper is that the authors also got it wrong. Deborah pointed out that the p value is the probability, under the null hypothesis, of the data or something more extreme. The authors stated that the correct answer was that the p value is the probability of the data under the null hypothesis; they forgot the “more extreme.” So, I love this article, because the scientists who set out to show that their colleagues did not understand the meaning of p values themselves did not understand the meaning of p values.
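Berger’s "at most seven to one" figure can be checked directly. For a two-sided p value from a normal test statistic, the largest likelihood ratio any point alternative can achieve over the null is exp(z²/2), where z is the normal quantile matching the p value; this sketch (not code from the debate, and using the standard bound rather than Berger’s own derivation) evaluates it at p = .05.

```python
import math
from statistics import NormalDist

def max_odds_against_null(p: float) -> float:
    """Upper bound on the likelihood ratio of any point alternative
    over a point null, for a two-sided p value from a normal test:
    exp(z**2 / 2), with z the quantile matching the p value."""
    z = NormalDist().inv_cdf(1 - p / 2)  # two-sided critical value
    return math.exp(z * z / 2)

bound = max_odds_against_null(0.05)
print(f"p = 0.05 gives odds against the null of at most {bound:.1f} to 1")
```

With z ≈ 1.96 the bound is about 6.8, i.e., roughly the "seven to one" Berger cites, and far below the 20 to one that practitioners commonly read into p = .05.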

**Dan Jeske **08:42

**David?**

**David Trafimow **08:44

Okay. Yeah. Um, like Deborah and Jim, I’m delighted to be here. Thanks for the invitation. And I partly agree with what both Deborah and Jim said; it’s certainly true that people misuse p values. So, I agree with that. However, I think p values are more problematic than the other speakers have mentioned. And here’s the problem for me. We keep talking about p values relative to hypotheses, but that’s not really true. P values are relative to hypotheses plus additional assumptions. So, if we use the term model to describe the null hypothesis plus additional assumptions, then p values are based on models, not on hypotheses, or only partly on hypotheses. Now, here’s the thing. What are these other assumptions? An example would be random selection from the population, an assumption that is not true in any one of the thousands of papers I’ve read in psychology. And there are other assumptions, a lack of systematic error, linearity, and we can go on and on; people have even published taxonomies of the assumptions because there are so many of them. See, it’s tantamount to impossible that the model is correct, which means that the model is wrong. And so, what you’re in essence doing, then, is using the p value to index evidence against a model that is already known to be wrong. And even the part about indexing evidence is questionable, but I’ll go with it for the moment. But the point is the model is wrong, and so there’s no point in indexing evidence against it. So given that, I don’t really see that there’s any use for them. P values don’t tell you how close the model is to being right. P values don’t tell you how valuable the model is. P values pretty much don’t tell you anything that researchers might want to know, unless you misuse them. Anytime you draw a conclusion from a p value, you are guilty of misuse. So, I think the misuse problem is much more subtle than is perhaps obvious at first glance.
So, that’s really all I have to say at the moment.
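One way to see Trafimow’s point that a p value depends on the model, not just the hypothesis, is a simulation in which the null hypothesis is true but an auxiliary assumption (independence of observations) fails. This is a minimal sketch; the AR(1) noise model and all numbers are illustrative choices, not from the debate.

```python
import math
import random

random.seed(2)

# The null hypothesis (true mean zero) is TRUE throughout, but the
# independence assumption fails: the noise follows an AR(1) process
# with autocorrelation rho = 0.5.
n, rho, reps = 30, 0.5, 4000
z_crit = 1.96  # nominal two-sided 5% cutoff (normal approximation)

def rejects_true_null() -> bool:
    """One simulated study: autocorrelated noise, true mean zero."""
    x, prev = [], 0.0
    for _ in range(n):
        prev = rho * prev + random.gauss(0.0, 1.0)
        x.append(prev)
    mean = sum(x) / n
    sd = math.sqrt(sum((v - mean) ** 2 for v in x) / (n - 1))
    z = mean / (sd / math.sqrt(n))
    return abs(z) > z_crit

rate = sum(rejects_true_null() for _ in range(reps)) / reps
print(f"false-rejection rate at nominal 5%: {rate:.3f}")
```

With rho set to 0 the same code gives a rate near the nominal 5%; with the violated independence assumption the true null is rejected far more often, which is exactly the sense in which the p value indexes the whole model, assumptions included.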

**Dan Jeske **11:28

**Thank you. Jim, would you like to follow up?**

**Jim Berger **11:32

Yes, so, so, I certainly agree that that assumptions are often made that are wrong. I won’t say that that’s always the case. I mean, I know many scientific disciplines where I think they do a pretty good job, and work with high energy physicists, and they do a pretty good job of checking their assumptions. Excellent job. And they use p values. It’s something to watch out for. But any statistical analysis, you know, can can run into this problem. If the assumptions are wrong, it’s, it’s going to be wrong.

**Dan Jeske **12:09

**Deborah…**

**Deborah Mayo **12:11

Okay. Well, Jim thinks that we should evaluate the p value by looking at the Bayes factor, which is what he does, and he finds that p values exaggerate, but we really shouldn’t expect agreement on numbers from methods that are evaluating different things. This is like supposing that if we switch from a height standard to a weight standard, then because we used six feet with height, we should now require six stone, to use an example from Stephen Senn. As for David, I think he’s wrong to worry about the assumptions in using the p value, since significance tests have the fewest assumptions of any method, which is why even Bayesians will say we need to apply them when we need to test our assumptions. And it’s something that we can do, especially with randomized controlled trials, to get the assumptions to work. The idea that we have to misinterpret p values to have them be relevant only rests on supposing that we need something other than what the p value provides.

**Dan Jeske **13:19

David, would you like to give some final thoughts on this question?

**David Trafimow **13:23

Sure. As far as Jim’s point, and Deborah’s point, that we can do things to make the assumptions less wrong: the problem is that the model either is wrong or it isn’t. Now, if the model is close, that doesn’t justify the p value, because the p value doesn’t give the closeness of the model. And that’s the problem. We’re not using, for example, a sample mean to estimate a population mean, in which case, yeah, you wouldn’t expect the sample mean to be exactly right; if it’s close, it’s still useful. The problem is that p values aren’t being used to estimate anything. So, if you’re not estimating anything, then you’re stuck with either correct or incorrect, and the answer is always incorrect. This is especially true in psychology, but I suspect it might even be true in physics. I’m not the physicist that Jim is, so I can’t say that for sure.

**Dan Jeske **14:35

**Jim, would you like to offer Final Thoughts?**

**Jim Berger **14:37

Let me comment on Deborah’s comment that Bayes factors are just a different scale of measurement. My point was that people invariably seem to think of p values as something like odds or the probability of the null hypothesis; if that’s the way they’re thinking, because that’s the way their minds reason, I believe we should provide them with odds. And so, I try to convert p values into odds or Bayes factors, because I think that’s much more readily understandable by people.

**Dan Jeske **15:11

Deborah, you have the final word on this question.

**Deborah Mayo **15:13

I do think that we need a proper philosophy of statistics to interpret p values. But I also think that what’s missing in the reject-p-values movement is that a major reason for calling in statistics in science is to give us tools to inquire whether an observed phenomenon is a real effect or just noise in the data, and p values have intrinsic properties for this task, if used properly; other methods don’t, and to reject them is to jeopardize this important role. As Fisher emphasized, we need randomized controlled trials precisely to ensure the validity of statistical significance tests. To reject them because they don’t give us posterior probabilities is illicit. In fact, I think that those who claim we want such posteriors need to show, for any way we can actually get them, why.

**You can watch the debate at the NISS website or in this blog post.**

**You can find the complete audio transcript at this LINK:** https://otter.ai/u/hFILxCOjz4QnaGLdzYFdIGxzdsg

[There is a play button at the bottom of the page that allows you to start and stop the recording. You can move about in the transcript/recording by using the pause button and moving the cursor to another place in the dialog and then clicking the play button to hear the recording from that point. (The recording is synced to the cursor.)]

## The P-Values Debate

## The Statistics Debate! (NISS DEBATE, October 15, Noon – 2 pm ET)

**October 15, Noon – 2 pm ET (Website)**

*Where do* **YOU** *stand?*

Given the issues surrounding the misuses and abuse of p-values, do you think p-values should be used? Continue reading

## CALL FOR PAPERS (Synthese) Recent Issues in Philosophy of Statistics: Evidence, Testing, and Applications

**Call for Papers: **Topical Collection in *Synthese*

**Title:** Recent Issues in Philosophy of Statistics: Evidence, Testing, and Applications

**The deadline for submissions is** ~~1 November 2020~~ 1 December 2020

**Description:** Continue reading