Author Archives: Mayo

The P-Values Debate


 

National Institute of Statistical Sciences (NISS): The Statistics Debate (Video)

Categories: J. Berger, P-values, statistics debate | 7 Comments

The Statistics Debate! (NISS DEBATE, October 15, Noon – 2 pm ET)

October 15, Noon – 2 pm ET (Website)

Where do YOU stand?

Given the issues surrounding the misuse and abuse of p-values, do you think p-values should be used?

Do you think the use of estimation and confidence intervals eliminates the need for hypothesis tests?

Bayes Factors – are you for or against?

How should we address the reproducibility crisis?

If you are intrigued by these questions and have an interest in how they might be answered – one way or the other – then this is the event for you!

Want to get a sense of the thinking behind the practicality (or not) of various statistical approaches? Interested in hearing both sides of the story – during the same session!?

This event will be held in a debate format. The participants will be given selected questions ahead of time so they have a chance to think about their responses, but this is intended to be much less a presentation and more a give and take between the debaters.

So – let’s have fun with this!  The best way to find out what happens is to register and attend!

Debate Host

Dan Jeske (University of California, Riverside)

Participants

Jim Berger (Duke University)
Deborah Mayo (Virginia Tech)
David Trafimow (New Mexico State University)

Register to Attend this Event Here!



About the Participants

Dan Jeske (moderator) received MS and PhD degrees from the Department of Statistics at Iowa State University in 1982 and 1985, respectively. He was a distinguished member of technical staff and a technical manager at AT&T Bell Laboratories from 1985 to 2003. Concurrent with those positions, he was a visiting part-time lecturer in the Department of Statistics at Rutgers University. Since 2003, he has been a faculty member in the Department of Statistics at the University of California, Riverside (UCR), serving as chair of the department from 2008 to 2015. He is currently the Vice Provost of Academic Personnel and the Vice Provost of Administrative Resolution at UCR. He is the Editor-in-Chief of The American Statistician, an elected Fellow of the American Statistical Association, an elected member of the International Statistical Institute, and President-elect of the International Society for Business and Industrial Statistics. He has published over 100 peer-reviewed journal articles and is a co-inventor on 10 U.S. patents. He served a three-year term on the Board of Directors of the ASA from 2013 to 2015.

Jim Berger is the Arts and Sciences Professor of Statistics at Duke University. His current research interests include Bayesian model uncertainty and uncertainty quantification for complex computer models. Berger was president of the Institute of Mathematical Statistics from 1995 to 1996 and of the International Society for Bayesian Analysis during 2004. He was the founding director of the Statistical and Applied Mathematical Sciences Institute, serving from 2002 to 2010. He was co-editor of the Annals of Statistics from 1998 to 2000 and was a founding editor of the SIAM/ASA Journal on Uncertainty Quantification from 2012 to 2015. Berger received the COPSS ‘President’s Award’ in 1985, was the Fisher Lecturer in 2001 and the Wald Lecturer of the IMS in 2007, and received the Wilks Award from the ASA in 2015. He was elected as a foreign member of the Spanish Real Academia de Ciencias in 2002, elected to the USA National Academy of Sciences in 2003, awarded an honorary Doctor of Science degree from Purdue University in 2004, and became an Honorary Professor at East China Normal University in 2011.

Deborah G. Mayo is professor emerita in the Department of Philosophy at Virginia Tech. Her Error and the Growth of Experimental Knowledge won the 1998 Lakatos Prize in philosophy of science. She is a research associate at the London School of Economics’ Centre for the Philosophy of Natural and Social Science (CPNSS). She co-edited (with A. Spanos) Error and Inference (2010, CUP). Her most recent book is Statistical Inference as Severe Testing: How to Get Beyond the Statistics Wars (2018, CUP). She founded the Fund for Experimental Reasoning, Reliability and the Objectivity and Rationality of Science (E.R.R.O.R.), which sponsored a 2-week summer seminar in Philosophy of Statistics in 2019 for 15 faculty in philosophy, psychology, statistics, law and computer science (co-directed with A. Spanos). She publishes widely in philosophy of science, statistics, and philosophy of experiment. She blogs at errorstatistics.com and phil-stat-wars.com.

Click to access statistical-inference-as-severe-testing_flyer.pdf

David Trafimow is Professor in the Department of Psychology at New Mexico State University. His research area is social psychology. In particular, his research looks at social cognition, especially how self-cognitions are organized and how they relate to presumed determinants of behavior (e.g., attitudes, subjective norms, control beliefs, and behavioral intentions). His research interests include cognitive structures and processes underlying attributions and memory for events and persons. He is also involved in methodological, statistical, and philosophical issues pertaining to science.

HOST

National Institute of Statistical Sciences

SPONSOR

National Institute of Statistical Sciences

LOCATION

Online Webinar “Debate”
Categories: Announcement, J. Berger, P-values, Philosophy of Statistics, reproducibility, statistical significance tests, Statistics | 6 Comments

CALL FOR PAPERS (Synthese) Recent Issues in Philosophy of Statistics: Evidence, Testing, and Applications


Call for Papers: Topical Collection in Synthese

Title: Recent Issues in Philosophy of Statistics: Evidence, Testing, and Applications

The deadline for submissions is 1 December 2020 (extended from 1 November 2020).

Description:

Statistics plays an essential role in an extremely wide range of human reasoning. From theorizing in the physical and social sciences to determining evidential standards in legal contexts, statistical methods are ubiquitous, and questions about their proper application inevitably arise. As tools for making inferences that go beyond a given set of data, they are inherently a means of reasoning ampliatively, so it is unsurprising that philosophers interested in the notions of evidence and inductive inference have sought to use statistical frameworks to further our understanding of these topics. The purpose of this volume is to present a cross-section of subjects related to statistical argumentation, written by scholars from a variety of fields, in order to explore issues in philosophy of statistics from different perspectives. Here, we intend “Philosophy of Statistics” to be broadly construed. This volume will thus include discussions of foundational issues in statistics, as well as questions having to do with evidence, induction, and confirmation as applied in various contexts.

Appropriate topics for submission include, among others:

  • Analyses and critiques of particular statistical concepts and practices
  • Methods in “statistical forensics” whose goal is to shed light on whether a body of research is trustworthy
  • Statistics as related to topics such as causal inference and idealization
  • Analyses of the evidential status of statistical arguments in the law, grounded in practical cases
  • Philosophically motivated conceptions of evidence
  • Issues in data science, psychology, and medical epistemology

For further information, please contact the guest editor(s): molly.kao@umontreal.ca; eshech@auburn.edu

See: https://philevents.org/event/show/83126

Journal: Synthese

Guest Editor(s):

Molly Kao, University of Montreal
Deborah Mayo, Virginia Tech
Elay Shech, Auburn University

 

 

Categories: Announcement, CFP, Synthese | Leave a comment

G.A. Barnard’s 105th Birthday: The Bayesian “catch-all” factor: probability vs likelihood


G. A. Barnard: 23 September 1915 – 30 July 2002

Yesterday was statistician George Barnard’s 105th birthday. To acknowledge it, I reblog an exchange between Barnard, Savage (and others) on likelihood vs probability. The exchange is from pp. 79-84 of (what I call) “The Savage Forum” (Savage, 1962).[i] A portion appears on p. 420 of my Statistical Inference as Severe Testing: How to Get Beyond the Statistics Wars (2018, CUP). Six other posts on Barnard are linked below, including 2 guest posts (Senn, Spanos), a play (pertaining to our first meeting), and a letter Barnard wrote to me in 1999.

 ♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠♠

BARNARD:…Professor Savage, as I understand him, said earlier that a difference between likelihoods and probabilities was that probabilities would normalize because they integrate to one, whereas likelihoods will not. Now probabilities integrate to one only if all possibilities are taken into account. This requires in its application to the probability of hypotheses that we should be in a position to enumerate all possible hypotheses which might explain a given set of data. Now I think it is just not true that we ever can enumerate all possible hypotheses. … If this is so we ought to allow that in addition to the hypotheses that we really consider we should allow something that we had not thought of yet, and of course as soon as we do this we lose the normalizing factor of the probability, and from that point of view probability has no advantage over likelihood. This is my general point, that I think while I agree with a lot of the technical points, I would prefer that this is talked about in terms of likelihood rather than probability. I should like to ask what Professor Savage thinks about that, whether he thinks that the necessity to enumerate hypotheses exhaustively, is important.

SAVAGE: Surely, as you say, we cannot always enumerate hypotheses so completely as we like to think. The list can, however, always be completed by tacking on a catch-all ‘something else’. In principle, a person will have probabilities given ‘something else’ just as he has probabilities given other hypotheses. In practice, the probability of a specified datum given ‘something else’ is likely to be particularly vague – an unpleasant reality. The probability of ‘something else’ is also meaningful of course, and usually, though perhaps poorly defined, it is definitely very small. Looking at things this way, I do not find probabilities unnormalizable, certainly not altogether unnormalizable.

Whether probability has an advantage over likelihood seems to me like the question whether volts have an advantage over amperes. The meaninglessness of a norm for likelihood is for me a symptom of the great difference between likelihood and probability. Since you question that symptom, I shall mention one or two others. …

On the more general aspect of the enumeration of all possible hypotheses, I certainly agree that the danger of losing serendipity by binding oneself to an over-rigid model is one against which we cannot be too alert. We must not pretend to have enumerated all the hypotheses in some simple and artificial enumeration that actually excludes some of them. The list can however be completed, as I have said, by adding a general ‘something else’ hypothesis, and this will be quite workable, provided you can tell yourself in good faith that ‘something else’ is rather improbable. The ‘something else’ hypothesis does not seem to make it any more meaningful to use likelihood for probability than to use volts for amperes.

Let us consider an example. Off hand, one might think it quite an acceptable scientific question to ask, ‘What is the melting point of californium?’ Such a question is, in effect, a list of alternatives that pretends to be exhaustive. But, even specifying which isotope of californium is referred to and the pressure at which the melting point is wanted, there are alternatives that the question tends to hide. It is possible that californium sublimates without melting or that it behaves like glass. Who dare say what other alternatives might obtain? An attempt to measure the melting point of californium might, if we are serendipitous, lead to more or less evidence that the concept of melting point is not directly applicable to it. Whether this happens or not, Bayes’s theorem will yield a posterior probability distribution for the melting point given that there really is one, based on the corresponding prior conditional probability and on the likelihood of the observed reading of the thermometer as a function of each possible melting point. Neither the prior probability that there is no melting point, nor the likelihood for the observed reading as a function of hypotheses alternative to that of the existence of a melting point enter the calculation. The distinction between likelihood and probability seems clear in this problem, as in any other.

BARNARD: Professor Savage says in effect, ‘add at the bottom of list H1, H2,…”something else”’. But what is the probability that a penny comes up heads given the hypothesis ‘something else’. We do not know. What one requires for this purpose is not just that there should be some hypotheses, but that they should enable you to compute probabilities for the data, and that requires very well defined hypotheses. For the purpose of applications, I do not think it is enough to consider only the conditional posterior distributions mentioned by Professor Savage.

LINDLEY: I am surprised at what seems to me an obvious red herring that Professor Barnard has drawn across the discussion of hypotheses. I would have thought that when one says this posterior distribution is such and such, all it means is that among the hypotheses that have been suggested the relevant probabilities are such and such; conditionally on the fact that there is nothing new, here is the posterior distribution. If somebody comes along tomorrow with a brilliant new hypothesis, well of course we bring it in.

BARTLETT: But you would be inconsistent because your prior probability would be zero one day and non-zero another.

LINDLEY: No, it is not zero. My prior probability for other hypotheses may be ε. All I am saying is that conditionally on the other 1 – ε, the distribution is as it is.

BARNARD: Yes, but your normalization factor is now determined by ε. Of course ε may be anything up to 1. Choice of letter has an emotional significance.

LINDLEY: I do not care what it is as long as it is not one.

BARNARD: In that event two things happen. One is that the normalization has gone west, and hence also this alleged advantage over likelihood. Secondly, you are not in a position to say that the posterior probability which you attach to an hypothesis from an experiment with these unspecified alternatives is in any way comparable with another probability attached to another hypothesis from another experiment with another set of possibly unspecified alternatives. This is the difficulty over likelihood. Likelihood in one class of experiments may not be comparable to likelihood from another class of experiments, because of differences of metric and all sorts of other differences. But I think that you are in exactly the same difficulty with conditional probabilities just because they are conditional on your having thought of a certain set of alternatives. It is not rational in other words. Suppose I come out with a probability of a third that the penny is unbiased, having considered a certain set of alternatives. Now I do another experiment on another penny and I come out of that case with the probability one third that it is unbiased, having considered yet another set of alternatives. There is no reason why I should agree or disagree in my final action or inference in the two cases. I can do one thing in one case and other in another, because they represent conditional probabilities leaving aside possibly different events.

LINDLEY: All probabilities are conditional.

BARNARD: I agree.

LINDLEY: If there are only conditional ones, what is the point at issue?

PROFESSOR E.S. PEARSON: I suggest that you start by knowing perfectly well that they are conditional and when you come to the answer you forget about it.

BARNARD: The difficulty is that you are suggesting the use of probability for inference, and this makes us able to compare different sets of evidence. Now you can only compare probabilities on different sets of evidence if those probabilities are conditional on the same set of assumptions. If they are not conditional on the same set of assumptions they are not necessarily in any way comparable.

LINDLEY: Yes, if this probability is a third conditional on that, and if a second probability is a third, conditional on something else, a third still means the same thing. I would be prepared to take my bets at 2 to 1.

BARNARD: Only if you knew that the condition was true, but you do not.

GOOD: Make a conditional bet.

BARNARD: You can make a conditional bet, but that is not what we are aiming at.

WINSTEN: You are making a cross comparison where you do not really want to, if you have got different sets of initial experiments. One does not want to be driven into a situation where one has to say that everything with a probability of a third has an equal degree of credence. I think this is what Professor Barnard has really said.

BARNARD: It seems to me that likelihood would tell you that you lay 2 to 1 in favour of H1 against H2, and the conditional probabilities would be exactly the same. Likelihood will not tell you what odds you should lay in favour of H1 as against the rest of the universe. Probability claims to do that, and it is the only thing that probability can do that likelihood cannot.

You can read the rest of pages 78-103 of the Savage Forum here.
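For readers meeting this dispute for the first time, Barnard’s normalization point can be put in symbols (my notation, not the Forum’s). With specified hypotheses $H_1,\dots,H_k$ and a catch-all $H_{k+1}$ (“something else”) given prior probability $\varepsilon$, Bayes’s theorem requires

$$
P(H_i \mid x) \;=\; \frac{P(x \mid H_i)\,P(H_i)}{\sum_{j=1}^{k} P(x \mid H_j)\,P(H_j) \;+\; P(x \mid H_{k+1})\,\varepsilon},
$$

and $P(x \mid H_{k+1})$, the probability of the data given “something else”, is precisely the term Barnard says we cannot compute; each likelihood $P(x \mid H_i)$, by contrast, stands on its own with no normalizing denominator.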

 HAPPY BIRTHDAY GEORGE!

References

[i] Savage, L. J. (1962), “Discussion”, in The Foundations of Statistical Inference: A Discussion (G. A. Barnard and D. R. Cox, eds.), London: Methuen, p. 76.
 
 

 

Categories: Barnard, phil/history of stat, Statistics | 10 Comments

Live Exhibit: Bayes Factors & Those 6 ASA P-value Principles


Live Exhibit: So what happens if you replace “p-values” with “Bayes Factors” in the 6 principles from the 2016 American Statistical Association (ASA) Statement on P-values? (Remove “or statistical significance” in principle 5.)

Does the one positive assertion hold? Are the 5 “don’ts” true?

 

  1. P-values can indicate how incompatible the data are with a specified statistical model.
  2. P-values do not measure the probability that the studied hypothesis is true, or the probability that the data were produced by random chance alone.
  3. Scientific conclusions and business or policy decisions should not be based only on whether a p-value passes a specific threshold.
  4. Proper inference requires full reporting and transparency. P-values and related analyses should not be reported selectively. Conducting multiple analyses of the data and reporting only those with certain p-values (typically those passing a significance threshold) renders the reported p-values essentially uninterpretable.
  5. A p-value, or statistical significance, does not measure the size of an effect or the importance of a result.
  6. By itself, a p-value does not provide a good measure of evidence regarding a model or hypothesis.

I will hold off saying what I think until our Phil Stat forum (Phil Stat Wars and Their Casualties) on Thursday [1], although anyone who has read Statistical Inference as Severe Testing: How to Get Beyond the Statistics Wars [SIST] (CUP, 2018) will have a pretty good idea. You can read the relevant sections 4.5 and 4.6 in proof form. In SIST, examples are called “exhibits”, and examples the reader is invited to work through are called “live exhibits”. That’s because the whole book involves tours through statistical museums.
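For anyone who wants to work through the live exhibit numerically before Thursday, here is a minimal sketch (my illustration, not from the ASA statement or SIST). It computes, for one set of made-up numbers, the two quantities the exhibit asks you to swap: a two-sided p-value for a normal mean, and a Bayes factor for the point null against an alternative with an assumed normal prior of scale tau (the values of n, xbar, sigma, and tau are all invented for illustration):

```python
# Minimal sketch: one dataset, two summaries. Assumes a normal mean test,
# H0: mu = 0 vs H1: mu != 0, with known sigma; tau is an illustrative prior
# scale under H1, not anything prescribed by the ASA statement.
import numpy as np
from scipy import stats

n, xbar, sigma = 50, 0.3, 1.0          # sample size, observed mean, known sd
se = sigma / np.sqrt(n)

# Two-sided p-value for the observed mean
z = xbar / se
p_value = 2 * (1 - stats.norm.cdf(abs(z)))

# Bayes factor BF01 with mu ~ N(0, tau^2) under H1: marginally,
# xbar ~ N(0, se^2) under H0 and xbar ~ N(0, se^2 + tau^2) under H1.
tau = 1.0
bf01 = stats.norm.pdf(xbar, 0, se) / stats.norm.pdf(xbar, 0, np.sqrt(se**2 + tau**2))

print(f"z = {z:.2f}, p = {p_value:.4f}, BF01 = {bf01:.3f}")
```

With these made-up numbers the very same data give p ≈ 0.03 yet BF01 ≈ 0.8, which is one reason the six principles read rather differently once “Bayes factor” is substituted for “p-value”.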

 

What do you think?

 

[1] For my general take on the meaning of the theme, see Statistical Crises and Their Casualties.

Selected blog posts on the 2016 ASA Statement on P-values & the Wasserstein et al. editorial in the March 2019 supplement to The American Statistician:

  • March 7, 2016: “Don’t Throw Out the Error Control Baby With the Bad Statistics Bathwater”
  • March 25, 2019: “Diary for Statistical War Correspondents on the Latest Ban on Speech.”
  • June 17, 2019: “The 2019 ASA Guide to P-values and Statistical Significance: Don’t Say What You Don’t Mean” (Some Recommendations)(ii)
  • July 19, 2019: The NEJM Issues New Guidelines on Statistical Reporting: Is the ASA P-Value Project Backfiring? (i)
  • September 19, 2019: (Excerpts from) ‘P-Value Thresholds: Forfeit at Your Peril’ (free access). The article by Hardwicke and Ioannidis (2019), and the editorials by Gelman and by me are linked on this post. My article is P-value Thresholds: Forfeit at your Peril.
  • November 4, 2019: On some Self-defeating aspects of the ASA’s 2019 recommendations on statistical significance tests
  • November 14, 2019: The ASA’s P-value Project: Why it’s Doing More Harm than Good (cont from 11/4/19)
  • November 30, 2019: P-Value Statements and Their Unintended(?) Consequences: The June 2019 ASA President’s Corner (b)
  • Les Stats C’est Moi: We Take That Step Here!
Categories: ASA Guide to P-values, bayes factors | 2 Comments

September 24: Bayes factors from all sides: who’s worried, who’s not, and why (R. Morey)

Information and directions for joining our forum are here.

 

R. Morey’s slides “Bayes Factors from all sides: who’s worried, who’s not, and why” are at this link: https://richarddmorey.github.io/TalkPhilStat2020/#1

Upcoming talks will include Stephen Senn (Statistical consultant, Scotland, November 19, 2020); Deborah Mayo (Philosophy, Virginia Tech, December 19, 2020); and Alexander Bird (Philosophy, King’s College London, January 28, 2021).  https://phil-stat-wars.com/schedule/.

In October, instead of our monthly meeting, I invite you to a P-value debate on October 15 sponsored by the National Institute of Statistical Sciences, with J. Berger, D. Mayo, and D. Trafimow. Register at https://www.niss.org/events/statistics-debate.

 

Categories: Announcement, bayes factors, Error Statistics, Phil Stat Forum, Richard Morey | 1 Comment

All She Wrote (so far): Error Statistics Philosophy: 9 years on

Dear Reader: I began this blog 9 years ago (Sept. 3, 2011)! A double celebration is taking place at the Elbar Room tonight (a smaller one was held earlier in the week), both for the blog and the 2-year anniversary of the physical appearance of my book: Statistical Inference as Severe Testing: How to Get Beyond the Statistics Wars [SIST] (CUP, 2018). A special rush edition made an appearance on Sept. 3, 2018, in time for the RSS meeting in Cardiff. If you’re in the neighborhood, stop by for some Elba Grease.


Many of the discussions in the book were importantly influenced (corrected and improved) by readers’ comments on the blog over the years. I posted several excerpts and mementos from SIST here. I thank readers for their input. Readers should look up the topics in SIST on this blog to check out the comments, and see how ideas were developed, corrected, and turned into “excursions” in SIST.

In the summer of 2019, A. Spanos and I led a Summer Seminar in Phil Stat at Virginia Tech for 15 faculty members from around the world in philosophy, psychology, and statistics. A write up is here.

This past summer (May 21-June 18), I ran a virtual LSE PH500 seminar on Current Controversies in Phil Stat.

Please peruse the 9 years of offerings below, taking advantage of the discussions by guest posters and readers. Continue reading

Categories: blog contents, Metablog | Leave a comment

5 September, 2018 (w/updates) RSS 2018 – Significance Tests: Rethinking the Controversy


Day 2, Wed 5th September, 2018:

The 2018 Meeting of the Royal Statistical Society (Cardiff)

11:20 – 13:20

Keynote 4 – Significance Tests: Rethinking the Controversy (Assembly Room)

Speakers:
Sir David Cox, Nuffield College, Oxford
Deborah Mayo, Virginia Tech
Richard Morey, Cardiff University
Aris Spanos, Virginia Tech

Intermingled in today’s statistical controversies are some long-standing, but unresolved, disagreements on the nature and principles of statistical methods and the roles for probability in statistical inference and modelling. In reaction to the so-called “replication crisis” in the sciences, some reformers identify significance tests as a major culprit. To understand the ramifications of the proposed reforms, there is a pressing need for a deeper understanding of the source of the problems in the sciences and a balanced critique of the alternative methods being proposed to supplant significance tests. In this session speakers offer perspectives on significance tests from statistical science, econometrics, experimental psychology and philosophy of science. There will also be a panel discussion.

5 Sept. 2018 (photo taken by A. Spanos)

Continue reading

Categories: Error Statistics | Leave a comment

The Physical Reality of My New Book! Here at the RSS Meeting (2 years ago)


You can find several excerpts and mementos from the book, including whole “tours” (in proofs) updated June 2020 here.

Categories: SIST | Leave a comment

Statistical Crises and Their Casualties – what are they?

What do I mean by “The Statistics Wars and Their Casualties”? It is the title of the workshop I have been organizing with Roman Frigg at the London School of Economics (CPNSS) [1], which was to have happened in June. It is now the title of a forum I am zooming on Phil Stat that I hope you will want to follow. It’s time that I explain and explore some of the key facets I have in mind with this title. Continue reading

Categories: Error Statistics | 4 Comments

New Forum on The Statistics Wars & Their Casualties: August 20, Preregistration (D. Lakens)

I will now hold a monthly remote forum on Phil Stat: The Statistics Wars and Their Casualties – the title of the workshop I had scheduled to hold at the London School of Economics (Centre for Philosophy of Natural and Social Science: CPNSS) on 19-20 June 2020. (See the announcement at the bottom of this blog.) I held the graduate seminar in Philosophy (PH500) that was to precede the workshop remotely (from May 21-June 25), and this new forum will be both an extension of that and a linkage to the planned workshop. The issues are too pressing to put off for a future in-person workshop, which I still hope to hold. It will begin with presentations by workshop participants, with lots of discussion. If you want to be part of this monthly forum and engage with us, please go to the information and directions page. The links are now fixed, sorry. (It also includes readings for Aug 20.) If you are already on our list, you’ll automatically be notified of new meetings. (If you have questions, email me.) Continue reading

Categories: Announcement | Leave a comment

August 6: JSM 2020 Panel on P-values & “Statistical Significance”

SLIDES FROM MY PRESENTATION

July 30 PRACTICE VIDEO for JSM talk (All materials for Practice JSM session here)

JSM 2020 Panel Flyer (PDF)
JSM online program (w/panel abstract & information):

Categories: ASA Guide to P-values, Error Statistics, evidence-based policy, JSM 2020, P-values, Philosophy of Statistics, science communication, significance tests | 3 Comments

JSM 2020 Panel on P-values & “Statistical Significance”

All: On July 30 (10am EST) I will give a virtual version of my JSM presentation, much like the one I will actually give remotely on Aug 6 at the JSM. Co-panelist Stan Young may as well. One of our surprise guests tomorrow (not at the JSM) will be Yoav Benjamini! If you’re interested in attending our July 30 practice session* please follow the directions here. Background items for this session are in the “readings” and “memos” of session 5.

*unless you’re already on our LSE Phil500 list

JSM 2020 Panel Flyer (PDF)
JSM online program (w/panel abstract & information): Continue reading

Categories: Announcement, JSM 2020, significance tests, stat wars and their casualties | Leave a comment

Stephen Senn: Losing Control (guest post)


Stephen Senn
Consultant Statistician
Edinburgh

Losing Control

Match points

The idea of local control is fundamental to the design and analysis of experiments and contributes greatly to a design’s efficiency. In clinical trials such control is often accompanied by randomisation, and the way that the randomisation is carried out has a close relationship to how the analysis should proceed. For example, if a parallel group trial is carried out in different centres, but randomisation is ‘blocked’ by centre, then, logically, centre should be in the model (Senn, S. J. & Lewis, R. J., 2019). On the other hand, if all the patients in a given centre are allocated the same treatment at random, as in a so-called cluster randomised trial, then the fundamental unit of inference becomes the centre and patients are regarded as repeated measures on it. In other words, the way in which the allocation has been carried out affects the degree of matching that has been achieved and this, in turn, is related to the analysis that should be employed. A previous blog of mine, To Infinity and Beyond, discusses the point. Continue reading
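To make the contrast concrete, here is a hedged sketch (mine, not Senn’s) of the two allocation schemes; the centre labels, patient counts, and block size are invented for illustration:

```python
# Hypothetical sketch of the two allocation schemes contrasted above.
# Blocked-by-centre: treatments balanced within each centre.
# Cluster: each centre is allocated wholesale to one arm.
import random

random.seed(1)
centres = {c: [f"{c}-pt{i}" for i in range(4)] for c in ["A", "B", "C"]}

# Blocked randomisation: permute a balanced block of arm labels within each
# centre, so 'centre' belongs in the analysis model as a blocking factor.
blocked = {}
for centre, patients in centres.items():
    arms = ["T", "C"] * (len(patients) // 2)
    random.shuffle(arms)
    blocked.update(dict(zip(patients, arms)))

# Cluster randomisation: one draw per centre; the centre, not the patient,
# becomes the unit of inference, with patients as repeated measures on it.
cluster = {}
for centre, patients in centres.items():
    arm = random.choice(["T", "C"])
    cluster.update({p: arm for p in patients})

print("blocked:", blocked)
print("cluster:", cluster)
```

In the first scheme treatment is balanced within each centre, so centre enters the analysis as a blocking factor; in the second, the randomisation operates on whole centres, which is why the centre becomes the fundamental unit of inference.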

Categories: covid-19, randomization, RCTs, S. Senn | 14 Comments

JSM 2020: P-values & “Statistical Significance”, August 6


Link: https://ww2.amstat.org/meetings/jsm/2020/onlineprogram/ActivityDetails.cfm?SessionID=219596

To register for JSM: https://ww2.amstat.org/meetings/jsm/2020/registration.cfm

Categories: JSM 2020, P-values | Leave a comment

Colleges & Covid-19: Time to Start Pool Testing


I. “Colleges Face Rising Revolt by Professors,” proclaims an article in today’s New York Times, in relation to returning to in-person teaching:

Thousands of instructors at American colleges and universities have told administrators in recent days that they are unwilling to resume in-person classes because of the pandemic. More than three-quarters of colleges and universities have decided students can return to campus this fall. But they face a growing faculty revolt.
Continue reading
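The title refers to pooled (“group”) testing. As a back-of-envelope illustration (my sketch, with an assumed prevalence, not figures from the post), Dorfman-style pooling tests a combined pool first and retests individuals only when the pool comes back positive:

```python
# Back-of-envelope Dorfman pooling arithmetic (my sketch, not from the post).
# Pool k samples; retest each sample individually only if the pool is
# positive. Assumes a perfectly accurate test and independent infections.
def tests_per_person(p: float, k: int) -> float:
    """Expected tests per person at prevalence p with pool size k."""
    return 1 / k + (1 - (1 - p) ** k)

for k in (5, 10, 20):
    print(k, round(tests_per_person(0.01, k), 3))
```

At 1% prevalence, pools of about 10 need roughly 0.2 tests per person, a five-fold saving over testing everyone individually (assuming, unrealistically, a perfectly accurate test).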

Categories: covid-19 | 8 Comments

David Hand: Trustworthiness of Statistical Analysis (LSE PH 500 presentation)

This was David Hand’s guest presentation (25 June) at our zoomed graduate research seminar (LSE PH500) on Current Controversies in Phil Stat (~30 min.)  I’ll make some remarks in the comments, and invite yours.


Trustworthiness of Statistical Analysis

David Hand

Abstract: Trust in statistical conclusions derives from the trustworthiness of the data and analysis methods. Trustworthiness of the analysis methods can be compromised by misunderstanding and incorrect application. However, that should stimulate a call for education and regulation, to ensure that methods are used correctly. The alternative of banning potentially useful methods, on the grounds that they are often misunderstood and misused, is short-sighted, unscientific, and Procrustean. It damages the capability of science to advance, and feeds into public mistrust of the discipline.

Below are Prof. Hand’s slides w/o audio, followed by a video w/audio. You can also view them on the Meeting #6 post on the PhilStatWars blog (https://phil-stat-wars.com/2020/06/21/meeting-6-june-25/). Continue reading

Categories: LSE PH 500 | 7 Comments

Bonus meeting: Graduate Research Seminar: Current Controversies in Phil Stat: LSE PH 500: 25 June 2020

Ship StatInfasSt

We’re holding a bonus 6th meeting of the graduate research seminar PH500 for the Philosophy, Logic & Scientific Method Department at the LSE:

(Remote 10am-12 EST, 15:00 – 17:00 London time; Thursday, June 25)

VI. (June 25) BONUS: Power, shpower, severity, positive predictive value (diagnostic model) & a Continuation of The Statistics Wars and Their Casualties

There will also be a guest speaker: Professor David Hand (Imperial College, London). Here is Professor Hand’s presentation (click on “present” to hear sound)

The main readings are on the blog page for the seminar.

 

Categories: Graduate Seminar PH500 LSE, power | Leave a comment

“On the Importance of testing a random sample (for Covid)”, an article from Significance magazine


Nearly 3 months ago I tweeted “Stat people: shouldn’t they be testing a largish random sample of people [w/o symptoms] to assess rates, alert those infected, rather than only high risk, symptomatic people, in the U.S.?” I was surprised that nearly all the stat and medical people I know expressed the view that it wouldn’t be feasible or even very informative. Really? Granted, testing was and is limited, but had it been made a priority, it could have been done. In the new issue of Significance (June 2020) that I just received, James J. Cochran writes “on the importance of testing a random sample.” [1] 
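Cochran’s arithmetic point is easy to see with made-up numbers (a hypothetical sketch, not figures from the Significance article): even a modest random sample pins down the infection rate quite tightly.

```python
# Hedged sketch with invented numbers: how precisely a largish random
# sample estimates an infection rate. Not from Cochran's article.
import math

n, positives = 10_000, 150          # hypothetical random sample
p_hat = positives / n
se = math.sqrt(p_hat * (1 - p_hat) / n)
lo, hi = p_hat - 1.96 * se, p_hat + 1.96 * se
print(f"estimated prevalence {p_hat:.3%}, 95% CI ({lo:.3%}, {hi:.3%})")
```

Here 150 positives out of a random 10,000 give an estimate of 1.5% with a 95% interval of roughly (1.26%, 1.74%), using the usual normal approximation.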

Continue reading

Categories: random sample | 13 Comments

Birthday of Allan Birnbaum: Foundations of Probability and Statistics (27 May 1923 – 1 July 1976)

27 May 1923 – 1 July 1976

Today is Allan Birnbaum’s birthday. In honor of it, I’m posting the articles in the Synthese volume that was dedicated to his memory in 1977. The editors describe it as their way of “paying homage to Professor Birnbaum’s penetrating and stimulating work on the foundations of statistics”. I had posted the volume before, but there are several articles well worth rereading. I paste a few snippets from the articles by Giere and Birnbaum. If you’re interested in statistical foundations, and are unfamiliar with Birnbaum, here’s a chance to catch up. (Even if you are, you may be unaware of some of these key papers.) Continue reading

Categories: Birnbaum, Likelihood Principle, Statistics, strong likelihood principle | 3 Comments
