The Guardian article answers yes to the question “Do ‘hard’ sciences hold the solution…”:
Psychology is evolving faster than ever. For decades now, many areas in psychology have relied on what academics call “questionable research practices” – a comfortable euphemism for types of malpractice that distort science but which fall short of the blackest of frauds, fabricating data. Continue reading →
Sir David Hendry, Professor of Economics at the University of Oxford, was given the Celebrating Impact Lifetime Achievement Award on June 8, 2014. Professor Hendry presented his automatic model selection program (Autometrics) at our conference, Statistical Science and Philosophy of Science (June 2010). (Site is here.) I’m posting an interesting video and related links. I invite comments on the paper Hendry published, “Empirical Economic Model Discovery and Theory Evaluation,” in our special volume of Rationality, Markets, and Morals (abstract below).
One of the world’s leading economists, INET Oxford’s Prof. Sir David Hendry received a unique award from the Economic and Social Research Council (ESRC)… Continue reading →
“Wonderful examples, but let’s not close our eyes,” is David J. Hand’s apt title for his discussion of the recent special issue (Feb 2014) of Statistical Science called “Big Bayes Stories” (edited by Sharon McGrayne, Kerrie Mengersen and Christian Robert). For your Saturday night/weekend reading, here are excerpts from Hand, another discussant (Welsh), scattered remarks of mine, along with links to papers and background. I begin with David Hand:
[The papers in this collection] give examples of problems which are well-suited to being tackled using such methods, but one must not lose sight of the merits of having multiple different strategies and tools in one’s inferential armory. (Hand)
…. But I have to ask, is the emphasis on ‘Bayesian’ necessary? That is, do we need further demonstrations aimed at promoting the merits of Bayesian methods? …The examples in this special issue were selected, firstly by the authors, who decided what to write about, and then, secondly, by the editors, in deciding the extent to which the articles conformed to their desiderata of being Bayesian success stories: that they ‘present actual data processing stories where a non-Bayesian solution would have failed or produced sub-optimal results.’ In a way I think this is unfortunate. I am certainly convinced of the power of Bayesian inference for tackling many problems, but the generality and power of the method is not really demonstrated by a collection specifically selected on the grounds that this approach works and others fail. To take just one example, choosing problems which would be difficult to attack using the Neyman-Pearson hypothesis testing strategy would not be a convincing demonstration of a weakness of that approach if those problems lay outside the class that that approach was designed to attack.
Hand goes on to make a philosophical assumption that might well be questioned by Bayesians: Continue reading →
Four score years ago (!) we held the conference “Statistical Science and Philosophy of Science: Where Do (Should) They Meet?” at the London School of Economics, Center for the Philosophy of Natural and Social Science (CPNSS), where I’m a visiting professor. Many of the discussions on this blog grew out of contributions from the conference, and conversations initiated soon after. The conference site is here; my paper on the general question is here.
My main contribution was “Statistical Science Meets Philosophy of Science Part 2: Shallow versus Deep Explorations” SS & POS 2. It begins like this:
1. Comedy Hour at the Bayesian Retreat
Overheard at the comedy hour at the Bayesian retreat: Did you hear the one about the frequentist… Continue reading →
Wilson E. Schmidt Professor of Economics, Department of Economics, Virginia Tech
Recurring controversies about P values and confidence intervals revisited* Ecological Society of America (ESA) ECOLOGY Forum—P Values and Model Selection (pp. 609-654)
Volume 95, Issue 3 (March 2014): pp. 645-651
The use, abuse, interpretations and reinterpretations of the notion of a P value have been a hot topic of controversy since the 1950s in statistics and several applied fields, including psychology, sociology, ecology, medicine, and economics.
The initial controversy between Fisher’s significance testing and the Neyman and Pearson (N-P; 1933) hypothesis testing concerned the extent to which the pre-data Type I error probability α can address the arbitrariness and potential abuse of Fisher’s post-data threshold for the P value. Continue reading →
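The pre-data/post-data contrast in the excerpt above can be illustrated numerically. The following is a minimal sketch, not taken from the paper; the test setting (one-sided Normal mean with known σ), function names, and all numerical values are my own illustrative assumptions.

```python
# Illustrative sketch: Fisher's post-data P value vs. the
# Neyman-Pearson pre-data Type I error bound alpha, for a one-sided
# test of H0: mu = mu0 against H1: mu > mu0 (Normal data, known sigma).
import math

def p_value(xbar, mu0, sigma, n):
    """Fisher: the post-data P value, a data-dependent measure."""
    z = (xbar - mu0) / (sigma / math.sqrt(n))
    # P(Z > z) for standard Normal Z, via the complementary error function
    return 0.5 * math.erfc(z / math.sqrt(2))

def np_reject(xbar, mu0, sigma, n):
    """Neyman-Pearson: alpha = 0.05 is fixed before the data are seen;
    the report is only reject / do-not-reject at that level."""
    z = (xbar - mu0) / (sigma / math.sqrt(n))
    z_alpha = 1.6448536269514722  # upper 5% point of N(0,1)
    return z > z_alpha

# Same data, two readings: N-P reports a binary decision at the
# pre-specified alpha; Fisher reports the observed P value itself.
print(round(p_value(10.4, 10.0, 1.0, 25), 4))  # → 0.0228
print(np_reject(10.4, 10.0, 1.0, 25))          # → True
```

Here both accounts agree on the rejection, but the N-P report (reject at α = 0.05) is the same whether the P value is 0.0228 or 0.0001, which is one root of the controversy over what the P value is entitled to convey post-data.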
“The medical press must become irrelevant to publication of clinical trials.” So said Stephen Senn at a recent meeting of the Medical Journalists’ Association with the title: “Is the current system of publishing clinical trials fit for purpose?” Senn has thrown a few stones in the direction of medical journals in guest posts on this blog, and in this paper, but it’s the first time I’ve heard him go this far. He wasn’t the only one answering the conference question “No!”, much to the surprise of medical journalist Jane Feinmann, whose article I am excerpting:
So what happened? Medical journals, the main vehicles for publishing clinical trials today, are after all the ‘gatekeepers of medical evidence’—as they are described in Bad Pharma, Ben Goldacre’s 2012 bestseller. …
Blood Simple? The complicated and controversial world of bioequivalence
by Stephen Senn*
Those not familiar with drug development might suppose that showing that a new pharmaceutical formulation (say a generic drug) is equivalent to a formulation that has a licence (say a brand name drug) ought to be simple. However, it can often turn out to be bafflingly difficult. Continue reading →