Remember “Repligate” [“Some Ironies in the Replication Crisis in Social Psychology“] and, more recently, the much-publicized attempt to replicate 100 published psychology articles by the Open Science Collaboration (OSC) [“The Paradox of Replication“]? Well, some of the critics involved in Repligate have just come out with a criticism of the OSC results, claiming they’re way, way off in their low estimate of replications in psychology. (The original OSC report is here.) I’ve only scanned the critical article quickly, but some bizarre statistical claims leap out at once. (Where do they get this notion about confidence intervals?) It’s published in Science! There’s also a response from the OSC researchers. Neither group adequately scrutinizes the validity of many of the artificial experiments and proxy variables–an issue I’ve been on about for a while. Without firming up the statistics-research link, no statistical fixes can help. I’m linking to the articles here for your weekend reading, and I invite your comments! For some reason, a whole bunch of items of interest, under the banner of “statistics and the replication crisis,” are all coming out at around the same time–who can keep up? March 7 brings yet more! (Stay tuned.)
My subtitle refers to my post alleging that non-replication articles are becoming so hot that non-significant results are the new significant results. Now we have another meta-level. So long as everyone’s getting published, who’s to complain, right? 
I’ll likely return to this once I’ve studied the articles–they’re quite short. Or, maybe readers can just share what they’ve found.
Recall the mention of one of the authors in the article cited in my earlier post on Repligate:
Mr. Gilbert, a professor of psychology at Harvard University, … wrote that certain so-called replicators are “shameless little bullies” and “second stringers” who engage in tactics “out of Senator Joe McCarthy’s playbook” (he later took back the word “little,” writing that he didn’t know the size of the researchers involved).
What got Mr. Gilbert so incensed was the treatment of Simone Schnall, a senior lecturer at the University of Cambridge, whose 2008 paper on cleanliness and morality was selected for replication in a special issue of the journal Social Psychology.
Wilson was also mentioned.
Never mind if there’s little, if any, progress in understanding the statistics or the phenomenon.
Gilbert, King, Pettigrew, and Wilson (2016), “Comment on ‘Estimating the Reproducibility of Psychological Science’” and “Response”
OSC report: Estimating the Reproducibility of Psychological Science.
Other blog discussions on this (please add any you find in the comments):
- Uri Simonsohn, on the Data Colada blog: “Evaluating Replications: 40% Full ≠ 60% Empty,” 3/3/16 post
- Gelman’s blog: “More on replication,” 3/3/16 post
- Gelman’s blog: “Replication crisis crisis,” 3/5/16 post
- Simine Vazire, on the Sometimes I’m Wrong blog: “is this what it sounds like when the doves cry?” http://sometimesimwrong.typepad.com/wrong/2016/03/doves-cry.html
- Sanjay Srivastava, on The Hardest Science blog: “Evaluating a new critique of the Reproducibility Project”
- Dorothy Bishop, on the Bishopblog: “There is a reproducibility crisis in psychology and we need to act on it” http://deevybee.blogspot.co.uk/2016/03/there-is-reproducibility-crisis-in.html
- Daniel Lakens, on The 20% Statistician blog: http://daniellakens.blogspot.com/2016/03/the-statistical-conclusions-in-gilbert.html?spref=tw
- Nosek: “Let’s not mischaracterize replication studies: authors,” at Retraction Watch
The following references to discussions of the OSC criticism are from Retraction Watch:
- Monya Baker, at Nature, takes a look at the analysis: “Psychology’s reproducibility problem is exaggerated – say psychologists.”
- Benedict Carey does the same, at The New York Times.
- Slate’s Rachel Gross has detailed comments from Brian Nosek, who led the original replication effort.
- “Psychology Is in Crisis Over Whether It’s in Crisis,” Katie Palmer at WIRED writes. Palmer notes that Harvard’s Dan Gilbert, one of the authors of the Science article, who in the past has called replicators “shameless bullies,” hung up on her when she asked “if he thought his defensiveness might have colored his interpretation of this data.”
- The reason why many of the studies involved in the Reproducibility Project didn’t replicate? “Overestimation of effect sizes…due to small sample sizes and publication bias in the psychological literature,” says a new paper in PLOS ONE.
- Ed Yong weighs in at The Atlantic with “Psychology’s replication crisis can’t be wished away.”