Monthly Archives: April 2021

The eighth meeting of our Phil Stat Forum*:
The Statistics Wars
and Their Casualties

22 April 2021
TIME: 15:00-16:45 (London); 10:00-11:45 (New York, EDT)

“How an information metric could bring truce to the statistics wars”
Daniele Fanelli

For information about the Phil Stat Wars forum and how to join, click on this link.
If interested, write to me for the zoom link (firstname.lastname@example.org).
I am reblogging a guest post that Aris Spanos wrote for this blog on Neyman’s birthday some years ago.
A Statistical Model as a Chance Mechanism
Jerzy Neyman (April 16, 1894 – August 5, 1981) was a Polish-American statistician[i] who spent most of his professional career at the University of California, Berkeley. Neyman is best known in statistics for his pioneering contributions in framing the Neyman-Pearson (N-P) optimal theory of hypothesis testing and his theory of confidence intervals. (This article was first posted here.)
Today is Jerzy Neyman’s birthday (April 16, 1894 – August 5, 1981). I’m posting a link to a quirky paper of his that explains one of the most misunderstood of his positions: what he was opposed to in opposing the “inferential theory”. The paper is Neyman, J. (1962), ‘Two Breakthroughs in the Theory of Statistical Decision Making’[i]. It’s chock full of ideas and arguments. “In the present paper,” he tells us, “the term ‘inferential theory’…will be used to describe the attempts to solve the Bayes’ problem with a reference to confidence, beliefs, etc., through some supplementation …either a substitute a priori distribution [exemplified by the so called principle of insufficient reason] or a new measure of uncertainty” such as Fisher’s fiducial probability. This arises on p. 391 of Excursion 5 Tour III of Statistical Inference as Severe Testing: How to Get Beyond the Statistics Wars (2018, CUP). Here’s a link to the proofs of that entire tour.

If you hear Neyman rejecting “inferential accounts”, you have to understand it in this very specific way: he is rejecting “new measures of confidence or diffidence”, which he alludes to as “easy ways out”. He is not rejecting statistical inference in favor of behavioral performance, as is typically thought. Neyman always distinguished his error statistical performance conception from Bayesian and fiducial probabilisms[ii]. The surprising twist here is semantical, and the culprit is none other than…Allan Birnbaum. Yet Birnbaum gets short shrift, and no mention is made of our favorite “breakthrough” (or did I miss it?). You can find quite a lot on this blog by searching Birnbaum.
Where do journal editors look to find someone to referee your manuscript (in the typical “double blind” review system in academic journals)? One obvious place to look is the reference list in your paper. After all, if you’ve cited them, they must know about the topic of your paper, putting them in a good position to write a useful review. The problem is that if your paper is on a topic of ardent disagreement, and you argue in favor of one side of the debates, then your reference list is likely to include those with actual or perceived conflicts of interest. If someone has a strong standpoint on an issue of some controversy, and a strong interest in persuading others to accept their side, an intellectual conflict of interest arises whenever that person has the power to uphold that view. Since your referee is in a position of significant power to do just that, it follows that they have a conflict of interest (COI). A lot of attention is paid to authors’ conflicts of interest, but little to the intellectual or ideological conflicts of interest of reviewers. At most, the concern is with the reviewer having special reasons to favor the author, usually thought to be indicated by their having been a previous co-author. We’ve been talking about journal editors’ conflicts of interest as of late (e.g., with Mark Burgman’s presentation at the last Phil Stat Forum), and this brings to mind another one.
The American Statistical Association has announced that it has decided to reverse course and share the recommendations developed by the ASA Task Force on Statistical Significance and Replicability in one of its official channels. The ASA Board created this group in November 2019 “with a charge to develop thoughtful principles and practices that the ASA can endorse and share with scientists and journal editors.” (AMSTATNEWS, 1 February 2020). Some members of the ASA Board felt that its earlier decision not to make these recommendations public, but instead to leave the group to publish its recommendations on its own, might give the appearance of a conflict of interest: a conflict between the ASA’s obligation to represent the wide variety of methodologies used by its members in widely diverse fields, and the advocacy by some members who believe practitioners should stop using the term “statistical significance” and end the practice of using p-value thresholds in interpreting data [the Wasserstein et al. (2019) editorial]. I think the decision to publicly share the new Task Force recommendations is very welcome, especially given that the Task Force was appointed to avoid just such an apparent conflict of interest. Past ASA President Karen Kafadar noted: