Stapel’s Fix for Science? Admit the story you want to tell and how you “fixed” the statistics to support it!



Stapel’s “fix” for science is to admit it’s all “fixed!”

The recent case of Michael LaCour, suspected of using faked data in a (retracted) paper on how to promote support for gay marriage, is directing a bit of limelight onto our star fraudster Diederik Stapel (50+ retractions).

The Chronicle of Higher Education just published an article by Tom Bartlett: Can a Longtime Fraud Help Fix Science? You can read his full interview of Stapel here. A snippet:

You write that “every psychologist has a toolbox of statistical and methodological procedures for those days when the numbers don’t turn out quite right.” Do you think every psychologist uses that toolbox? In other words, is everyone at least a little bit dirty?

Stapel: In essence, yes. The universe doesn’t give answers. There are no data matrices out there. We have to select from reality, and we have to interpret. There’s always dirt, and there’s always selection, and there’s always interpretation. That doesn’t mean it’s all untruthful. We’re dirty because we can only live with models of reality rather than reality itself. It doesn’t mean it’s all a bag of tricks and lies. But that’s where the inconvenience starts.

I think the solution is in accepting this and saying these are the tips and tricks, and this is the story I want to tell, and this is how I did it, instead of trying to pose as if it’s real. We should be more open about saying, I’m using this trick, this statistical method, and people can figure out for themselves. It’s the illusion that these models are one-to-one descriptions of reality. That’s what we hope for, but that’s of course not true. 

This is our “dirty hands” argument, so often used these days, coupled with claims of so-called “perverse incentives,” to excuse QRPs (questionable research practices), bias, and flat-out cheating. The leap from “our models are invariably idealizations” to “we all have dirty hands” to “statistical tricks cannot be helped” may inadvertently be encouraged by some articles on how to “fix” science.

Earlier in the interview:

You mention lots of possible reasons for your fraud: laziness, ambition, a short attention span. One of the more intriguing reasons to me — and you mention it twice in the book — is nihilism. Do you mean that? Did you think of yourself as a nihilist? Then or now?

Stapel: I’m not sure I’m a nihilist. …

Did you think of the work you were doing as meaningful?

Stapel: I was raised in the 1980s, at the height of postmodernism, and that was something I related to. I studied many of the French postmodernists. That made me question meaningfulness. I had a hard time explaining the meaningfulness of my work to students.

I’ll bet.

I agree with Bartlett that you don’t have to have any sympathy with a fraudster to learn from him about preventing doctored statistics or sharpening fraudbusting skills, except that it turns out Stapel really and truly believes science is a fraud![ii] In his pristine accomplishment of using no data at all, rather than merely subjecting them to extraordinary rendition (leaving others to wrangle over the fine points of statistics), you could say that Stapel is the ultimate, radical, postmodern scientific anarchist. Stapel is a personable guy, and I’ve had some interesting exchanges with him; but on that basis, from his “Fictionfactory,” and from his autobiography, “Derailment,” I say he’s the wrong person to ask. He still doesn’t get it!


[i] There are several posts on this blog that discuss Stapel:

Some Statistical Dirty Laundry

Derailment: Faking Science: A true story of academic fraud, by Diederik Stapel (translated into English)

Should a “fictionfactory” peepshow be barred from a festival on “Truth and Reality”? Diederik Stapel says no

How to hire a fraudster chauffeur (includes video of Stapel’s TED talk)

50 shades of grey between error and fraud

Thinking of Eating Meat Causes Antisocial Behavior

[ii] At least social science, social psychology. He may be right that the effects are small or uninteresting in social psych.

Categories: junk science, Statistics


11 thoughts on “Stapel’s Fix for Science? Admit the story you want to tell and how you “fixed” the statistics to support it!”

  1. Maybe we should call “days when the numbers don’t turn out quite right” bad data days.

  2. Nathan Schachtman


    Thanks for calling our attention to the interview. Stapel’s acknowledgement that he was influenced by post-modernism is interesting and revealing. At times, Stapel seems to lapse into post-modernist gobbledygook, which perhaps is why you say “he doesn’t get it.” Stapel’s prescription for group involvement as a preventive measure may chill outright data manipulation, but it still seems that lots of really bad science comes out in papers with multiple authors. And for many “hot” topics in the biological sciences, at any rate, the research community interested in the issues is often insular and prone to groupthink.


    • Hi Nate: Good to hear from you. He had mentioned postmodernism before, and it’s apparent in his Fictionfactory. I was reading “Derailment” last night, looking over my links to him, and parts of it make me think he’s pegged the field correctly after all. He was already fudging the complex theories that weren’t getting him anywhere. He’s correct about the demand for everything to be simple, entertaining (not just publications but classes)–at least in fields that aren’t curing cancer. The legitimate studies he did obtained genuine effects, only small ones with lots of context dependency. But if that’s all one can expect in the field, it’s just wrong to reward those who are getting sharp, dramatic effects every time. That’s why Richard Gill had said, about a different case, that they’re forced into QRPs in this field to publish. To me the lesson is, it’s bad science, questionable science, so don’t do it (or say it’s for entertainment only). But who is to say small, variable, context-dependent priming effects aren’t worthwhile? Maybe telling it all up front is the way to go after all–if it’s done right. E.g., we had this plausible theory about messy places and stereotypes, we handed out questionnaires, got mixed results, focused on the data from the messy train stations between 2-4, tossed out anyone who was eating while filling out a form, etc. No, no, it would be a silly sham. Move on to a different field.

      • > Move on to a different field.
        I love Freud’s comment, reported in “A Project for a Scientific Psychology”: “a scientific psychology is unlikely to be possible in my lifetime, but I have no intention of changing careers!”

        Keith O’Rourke

  3. David Oliver

    To the issue of NHST a/k/a science by strawman slaying, I’d like to know if you have an opinion about so-called “futility study design”. It’s becoming increasingly popular in neuroscience and seems a partial solution at least to the problem of too many false positives in bio-medicine. Here’s an overview: “The Utility of Futility” –

    Thanks in advance and apologies if this is a topic you’ve already addressed.

    • Had never heard of it, I’ll check it out.

    • David: On a quick glance: this looks very interesting. I will ask my stat-drug expert Stephen Senn. One thing I noticed was a mention of an odds ratio using a ratio of alpha and power. I always find this problematic.

  4. Mayo:

    Why are you so sure that Stapel “really and truly believes science is a fraud”? He’s done a lot of lying in the past; what makes you think that this time he’s telling the truth about his beliefs? It seems plausible to me that he’s just saying what he thinks will make him look good.

    • Andrew: I don’t think it makes him look good. I have had direct exchanges with him on this, and then there’s Fictionfactory. Of course, that’s a vague description (that science is a fraud), I meant that he believes much of current science is, not that it absolutely had to be.

  5. Nathan Schachtman

    Sounds like the liar’s paradox!
