Tom Kepler’s guest post arose in connection with my November 9 post & comments.
Professor Thomas B. Kepler
Department of Microbiology
Department of Mathematics & Statistics
Boston University School of Medicine
There is much to say about the article in the Economist, but the first thing to note is that it is far more balanced than its sensational headline promises. Promising to throw open the curtain on “Unreliable research” is mere click-bait for science-averse readers who have recently found validation for their intellectual insecurities in the populist uprising against the shadowy world of the scientist. What with the East Anglia conspiracy and so on, there’s no such thing as “too skeptical” when it comes to science.
There is some remarkably casual reporting in an article that purports to be concerned with mechanisms for ensuring that inaccuracies are not perpetuated.
For example, the authors cite the comment in Nature by Begley and Ellis and summarize it thus: “…scientists at Amgen, an American drug company, tried to replicate 53 studies that they considered landmarks in the basic science of cancer, often co-operating closely with the original researchers to ensure that their experimental technique matched the one used first time round.” Stan Young, in his comments to Mayo’s blog, adds, “These claims can not be replicated – even by the original investigators! Stop and think of that.” But in fact the role of the original investigators is described as follows in Begley and Ellis: “…when findings could not be reproduced, an attempt was made to contact the original authors, discuss the discrepant findings, exchange reagents and repeat experiments under the authors’ direction, occasionally even in the laboratory of the original investigator.” (Emphasis added.) Now, please stop and think about what agenda is served by eliding the tempered language of the original.
Both the Begley and Ellis comment and the brief correspondence by Prinz et al., also cited in this discussion, concern laboratories in commercial pharmaceutical companies failing to reproduce experimental results. In deciding how to interpret their findings, it would be prudent to bear in mind the insight from Harry Collins, the sociologist of science, paraphrased in the Economist piece: “performing an experiment always entails what sociologists call ‘tacit knowledge’—craft skills and extemporisations that their possessors take for granted but can pass on only through example. Thus if a replication fails, it could be because the repeaters didn’t quite get these je-ne-sais-quoi bits of the protocol right.” Indeed, I would go further and conjecture that few experimental biologists would hold out hope that any one laboratory could claim the expertise necessary to reproduce the results of 53 ground-breaking papers in diverse specialties, even within cancer drug discovery.

And to those who are unhappy that authors often do not comply with the journals’ clear policy of data-sharing: how do you suppose you would fare getting such data from the pharmaceutical companies that wrote these damning papers? Or from the authors of the papers themselves? Nature had to clarify, writing two months after the publication of Begley and Ellis, “Nature, like most journals, requires authors of research papers to make their data available on request. In this less formal Comment, we chose not to enforce this requirement so that Begley and Ellis could abide by the legal agreements [they made with the original authors].” There seems to be good reason that the data are not being provided, but it does make pursuing the usual self-corrective course of science ironically unavailable. Furthermore, one might be persuaded to extend the benefit of the doubt beyond this one case, to authors who do not respond immediately to demands for all their data and metadata. They, too, may have reasons (other than concealing ineptitude) for failing to respond to requests fast enough to satisfy the requestor.
I agree that there are problems with the way science is done, and that serious attention must be paid to making its practice more efficient and fairer to its practitioners. There is much to be gained by reforming peer review, for example, and a great deal of progress is being made. The hyper-competitive atmosphere of contemporary science and the attendant implicit directive to value speed over reliability are deeply problematic and unfair to many thoughtful young scientists. (I have often been frustrated at others’ scooping me while using flawed analyses. What was frustrating in these cases was not that they were wrong, but that they were right, in spite of the naiveté of their methods.) The industrialization of science and the growth of “team science” threaten to exacerbate the very real problem of elevating a small number of elite PIs to mythic status at the expense of many very good people. Indeed, the self-corrective nature of science is, unfortunate as it may be, strictly impersonal. The scientific method does not provide assurance that its practitioners will be treated justly and fairly. At the same time, I do not believe that fomenting rebellion (Stan Young’s comment: “Why should the taxpayer fund such an unreliable enterprise?”) is going to be a productive strategy.
The problem is that the non-practicing public, even the very well-educated, has an oversimplified conception of how science works. It is not the case that there is a finite set of propositions making up the instantaneous canon, and that this set of common beliefs grows and shrinks through the publication of experimental results. As we all know too well, the situation is far messier than that. Mistaken results and bad theories are not typically dispatched with a single fatal blow, but instead die through simple neglect. Perhaps it is lamentable that “to outsiders they will appear part of the scientific canon”, as the Economist opines, but that is simply not relevant as a measure of the ability of the scientific enterprise to self-correct. The reputations of poor scientists may survive longer than they should, but poor ideas are dealt with very effectively.
Where I agree strongly with the anonymous authors is in their contention that “Budding scientists must be taught technical skills, including statistics, and must be imbued with scepticism towards their own results and those of others.” I would further urge that scientists learn respect for their peers in other disciplines. I am fortunate to have had (and to currently hold) appointments in basic biomedical science departments and in mathematics and statistics departments, and have been on the receiving (and, alas, giving) end of interdisciplinary prejudices in both directions. Where statisticians see experimental biomedical researchers as corrupt strivers in need of policing, biologists see statisticians as uninterested in actual science and perfectly willing to hold up its progress indefinitely in the name of some imagined platonic ideal.
Maybe they’re both right. But maybe raising the next generation to be just a little more appreciative and less defensive will contribute to the continued growth of the scientific worldview we all share.
And in that vein, I’m fine with the statistical argument presented in the Economist article. It does not hew to any coherent philosophical conception of statistics, but it is clear, accurate as far as it goes, and conveys a correct understanding to the reader.
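For readers who did not follow the Economist’s arithmetic, the argument is the standard false-discovery calculation: if only a fraction of tested hypotheses are true, the false positives permitted by the significance threshold can rival the true positives detected at a given power. The sketch below uses the commonly cited illustrative figures (10% of hypotheses true, 80% power, 5% significance level); these inputs are assumptions for illustration, not necessarily the article’s exact numbers.

```python
# Sketch of the false-discovery arithmetic behind the Economist's argument.
# The inputs (prior truth rate, power, alpha) are assumed illustrative values.

def false_discovery_share(n_hypotheses, prior_true, power, alpha):
    """Fraction of statistically significant findings that are actually false."""
    n_true = n_hypotheses * prior_true        # hypotheses that are really true
    n_false = n_hypotheses - n_true           # hypotheses that are really false
    true_positives = n_true * power           # real effects the tests detect
    false_positives = n_false * alpha         # null effects flagged by chance
    return false_positives / (true_positives + false_positives)

share = false_discovery_share(1000, 0.10, 0.80, 0.05)
print(f"{share:.0%} of significant results are false")  # → 36%
```

With these inputs, 1,000 hypotheses yield 80 true positives and 45 false positives, so more than a third of the “significant” findings are wrong even though every test was run at the 5% level.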