Good Scientist Badge of Approval?

In an attempt to fix the problem of “unreal” results in science, some have started a “Reproducibility Initiative”. Think of the incentive this creates for being explicit about how the results were obtained the first time…. But would researchers really pay to have their potential errors unearthed in this way? Even for a “good scientist” badge of approval?

August 14, 2012

Fixing Science’s Problem of ‘Unreal’ Results: “Good Scientist: You Get a Badge!”

Carl Zimmer, Slate

As a young biologist, Elizabeth Iorns did what all young biologists do: She looked around for something interesting to investigate. Having earned a Ph.D. in cancer biology in 2007, she was intrigued by a paper that appeared the following year in Nature. Biologists at the University of California-Berkeley linked a gene called SATB1 to cancer. They found that it becomes unusually active in cancer cells and that switching it on in ordinary cells made them cancerous. The flipside proved true, too: Shutting down SATB1 in cancer cells returned them to normal. The results raised the exciting possibility that SATB1 could open up a cure for cancer. So Iorns decided to build on the research.

There was just one problem. As her first step, Iorns tried to replicate the original study. She couldn’t. Boosting SATB1 didn’t make cells cancerous, and shutting it down didn’t make the cancer cells normal again.

For some years now, scientists have gotten increasingly worried about replication failures. In one recent example, NASA made a headline-grabbing announcement in 2010 that scientists had found bacteria that could live on arsenic—a finding that would require biology textbooks to be rewritten. At the time, many experts condemned the paper as a poor piece of science that shouldn’t have been published. This July, two teams of scientists reported that they couldn’t replicate the results.

Nobody was harmed by believing that a species of bacteria in a California lake could feed on arsenic. But lives are on the line when scientists like Iorns can’t replicate a medical study. Nor is Iorns’ experience a fluke. C. Glenn Begley, who spent a decade in charge of global cancer research at the biotech giant Amgen, recently dispatched 100 Amgen scientists to replicate 53 landmark experiments in cancer—the kind of experiments that lead pharmaceutical companies to sink millions of dollars into turning the results into a drug. In March Begley published the results: They failed to replicate 47 of them.

Outright fraud probably accounts for a small fraction of such failures. In other cases, scientists may unconsciously ignore their own negative evidence and focus on the findings that provide a positive result. They may set up their experiments poorly. They may have gotten positive results thanks simply to chance.
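That last failure mode, plain chance, is easy to underestimate. As a minimal sketch (an illustration with arbitrary parameters, not anything from the article or the Initiative), the following simulation runs many two-group experiments in which the true effect is zero, keeps the ones that reach p < 0.05, and reruns each of those once:

```python
# Minimal sketch: chance alone yields "positive" results that then fail
# to replicate. All parameters are arbitrary choices for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_experiments, n_per_group, alpha = 1000, 20, 0.05

def one_null_experiment():
    # Two groups drawn from the same distribution: no real effect.
    a = rng.normal(0.0, 1.0, n_per_group)
    b = rng.normal(0.0, 1.0, n_per_group)
    return stats.ttest_ind(a, b).pvalue

# "Discoveries": null experiments that nonetheless reach significance.
positives = [i for i in range(n_experiments) if one_null_experiment() < alpha]
# Rerun each discovery once, as a replication attempt would.
replicated = sum(one_null_experiment() < alpha for _ in positives)

print(f"{len(positives)} of {n_experiments} null experiments came up positive")
print(f"{replicated} of those replicated on a second run")
```

Roughly 5 percent of the null experiments come up “significant”, and only about 5 percent of those survive a second run; a single published positive cannot, on its own, distinguish a real effect from this kind of luck.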

There’s nothing wrong with being wrong in science. Science is supposed to move forward as scientists test out one another’s ideas and results. But 21st-century science struggles to live up to this ideal. Scientific journals prize flashy, original papers (in part because journalists like me write about them). A disappointing follow-up simply doesn’t have the same cachet.

After her own rough experience with replication, Iorns went on to become an assistant professor at the University of Miami. Last year she also became an entrepreneur, starting up a firm called Science Exchange that brings together scientists with companies that can perform the services they need—everything from sequencing DNA to producing a genetically engineered mouse. And today she’s using Science Exchange to launch a service called the Reproducibility Initiative. If it works, it could be a strong medicine for what ails science these days.

Here’s how it is supposed to work. Let’s say you have found a drug that shrinks tumors. You write up your results, which are sexy enough to get into Nature or some other big-name journal. You also send the Reproducibility Initiative the details of your experiment and request that someone reproduce it. A board of advisers matches you up with a company that has the experience and technology to do the job. You pay the company to run the replication—Iorns estimates the bill will be about 10 percent of the original research costs—and it reports back whether it got the same results.

Why would you do this? For one thing, you’ll get a second paper out of the experience, and scientists are judged in part by the number of papers on their CV. Scientists often find it hard to publish replication studies. Iorns had to send her SATB1 paper to a number of journals before getting it published, even though it revealed that investigating SATB1 for a cancer cure would be a waste of time. The journal PLoS ONE has agreed to publish any study that comes out of the Reproducibility Initiative.

 READ THE REST OF THE ARTICLE

Categories: philosophy of science, Philosophy of Statistics


12 thoughts on “Good Scientist Badge of Approval?”

  1. Would any scientist pay to have their results replicated unless journals started requiring it? I suspect not, but I could be wrong. It would be wonderful if journals did start requiring it.

  2. gsganden

    The Reproducibility Initiative is a lovely idea, but I suspect that few if any scientists will pay to have their results replicated unless doing so becomes a de facto requirement for getting published in a top journal.

  3. Corey

    The editors of top journals such as Science and Nature might start imposing such a requirement if the prevalence of academics referring to those journals as “the tabloids” continues to increase.

    • Corey: Do they really? I wasn’t aware of that, aside from a couple of problem cases.

  4. Mark

    Seems like a good idea, although maybe a little off the mark (or maybe I’m just reading it wrong). Shouldn’t anyone who submitted their work for potential replication get some sort of “badge”, regardless of whether or not the results were replicated? Since replication is the heart of science and *any* positive result could potentially be a type I error, regardless of the observed p-value, it would seem to me that any attempt at replication is equally beneficial to science as a whole (the sketch after the comments works through the arithmetic). Or maybe I’m just misreading it as saying that only successful replications would get the “seal”…

  5. Mark: I don’t think they meant that you literally get a “seal”, but if the thing ever caught on, it could be seen as earning a kind of merit badge.

  6. Jean

    The philosopher of science, Helen Longino, has long argued (in a Popperian spirit) that science needs to reward criticism (e.g., failure to replicate in this article) as much as original investigations. Otherwise, no one, and certainly not the top tier, will see that type of work as important, and so it won’t get done. But then, science loses its critical and self-correcting nature. Longino envisioned journals of criticism, which would publish, for example, results of attempted replications.

  7. Ummad

    I’ve replicated a couple of studies. When the results are in line with the original study but I generalize them to a new data set with improved econometrics, the papers are rejected as offering nothing ‘significantly’ new. And when the replication does not confirm the results of the original study, they are rejected as not being backed by theory. So it appears that our discipline and its journals are paying mere lip service to replication. No one is serious about it.

    • Ummad: Hmm, I wonder if it’s mainly of interest in medicine/biology, especially where others will be interested in building on the work. In economics I’m only familiar with the importance of replication in experimental economics (ExperEcon). I’m not sure what area your work was in.
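A footnote on Mark’s type I error point in comment 4: the standard positive-predictive-value arithmetic makes it concrete. This is a minimal sketch with made-up numbers; the prior, power, and alpha below are arbitrary assumptions, not figures from the article or the Initiative.

```python
# Hypothetical illustration (made-up numbers): even a "significant"
# positive result may be a type I error; how likely depends on the
# base rate of true hypotheses among those tested.
prior = 0.10   # assumed fraction of tested hypotheses that are true
power = 0.80   # assumed chance a real effect reaches significance
alpha = 0.05   # chance a null effect reaches significance (type I error)

true_positives = power * prior            # 0.08
false_positives = alpha * (1 - prior)     # 0.045
ppv = true_positives / (true_positives + false_positives)

print(f"Share of positive results that are real: {ppv:.0%}")  # ~64%
```

On these assumptions, roughly a third of “significant” findings would be type I errors, which is why an attempted replication is informative whether or not it succeeds.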
