What do these share in common: m&ms, limbo stick, ovulation, Dale Carnegie? Sat night potpourri


For entertainment only

Here’s the follow-up to my last (reblogged) post, initially posted here. My take hasn’t changed much since 2013. Should we be labeling some pursuits “for entertainment only”? Why not? (See also a later post on the replication crisis in psych.)

^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

I had said I would label as pseudoscience, or questionable science, any enterprise that regularly permits the kind of ‘verification biases’ on the statistical dirty laundry list. How regularly? (I’ve been asked.)

Well, surely if it’s as regular as, say, much of social psychology, it goes over the line. But it’s not mere regularity; it’s the nature of the data, the type of inferences being drawn, and the extent of self-scrutiny and recognition of errors shown (or not shown). The regularity is just a consequence of the methodological holes. My standards may be considerably more stringent than most, but quite aside from statistical issues, I simply do not find hypotheses well tested if they are based on “experiments” that consist of giving questionnaires. At least not without a lot more self-scrutiny and discussion of flaws than I ever see. (There may be counterexamples.)

Attempts to recreate phenomena of interest in typical social science “labs” leave me with the same doubts. Huge gaps often exist between what is elicited in the lab and what is inferred from it. One might locate the problem under “external validity,” but to me it is just the general problem of relating statistical data to substantive claims.

Experimental economists (“expereconomists”) take lab results plus statistics to warrant sometimes ingenious inferences about substantive hypotheses. Vernon Smith (winner of the Nobel Prize in Economics) is rare in subjecting his own results to “stress tests”. I’m not withdrawing the optimistic assertions he cites from EGEK (Mayo 1996) on Duhem-Quine (e.g., from “Rhetoric and Reality”, 2001, p. 29). I’d still maintain, “Literal control is not needed to attribute experimental results correctly (whether to affirm or deny a hypothesis). Enough experimental knowledge will do”. But that requires piecemeal strategies that accumulate, and at least a little bit of “theory” and/or a decent amount of causal understanding.[1]

I think the generalizations extracted from questionnaires allow for an enormous amount of “reading into” the data. Suddenly one finds the “best” explanation. Questionnaires should be deconstructed for how they may be misinterpreted, not to mention how respondents tend to guess what the experimenter is looking for. (I’m reminded of the current hoopla over questionnaires on breadwinners, housework and divorce rates!) I respond with the same eye-rolling to just-so storytelling along the lines of evolutionary psychology.

I apply the “Stapel test”: even if Stapel had bothered to actually carry out the data-collection plans that he so carefully crafted, I would not find the inferences telling in the least. Take, for example, the planned-but-not-implemented study discussed in the recent New York Times article on Stapel:

Stapel designed one such study to test whether individuals are inclined to consume more when primed with the idea of capitalism. He and his research partner developed a questionnaire that subjects would have to fill out under two subtly different conditions. In one, an M&M-filled mug with the word “kapitalisme” printed on it would sit on the table in front of the subject; in the other, the mug’s word would be different, a jumble of the letters in “kapitalisme.” Although the questionnaire included questions relating to capitalism and consumption, like whether big cars are preferable to small ones, the study’s key measure was the amount of M&Ms eaten by the subject while answering these questions. … Stapel and his colleague hypothesized that subjects facing a mug printed with “kapitalisme” would end up eating more M&Ms.

Stapel had a student arrange to get the mugs and M&Ms and later load them into his car along with a box of questionnaires. He then drove off, saying he was going to run the study at a high school in Rotterdam where a friend worked as a teacher.

Stapel dumped most of the questionnaires into a trash bin outside campus. At home, using his own scale, he weighed a mug filled with M&Ms and sat down to simulate the experiment. While filling out the questionnaire, he ate the M&Ms at what he believed was a reasonable rate and then weighed the mug again to estimate the amount a subject could be expected to eat. He built the rest of the data set around that number. He told me he gave away some of the M&M stash and ate a lot of it himself. “I was the only subject in these studies,” he said.

He didn’t even know what a plausible number of M&Ms consumed would be! But never mind that: observing a genuine “effect” in this silly study would not have probed the hypothesis. Would it?

II. Dancing the pseudoscience limbo: How low should we go?


Should those of us serious about improving the understanding of statistics be expending ammunition on studies sufficiently crackpot to lead CNN to withdraw reporting on a resulting (published) paper?

“Last week CNN pulled a story about a study purporting to demonstrate a link between a woman’s ovulation and how she votes, explaining that it failed to meet the cable network’s editorial standards. The story was savaged online as “silly,” “stupid,” “sexist,” and “offensive.” Others were less nice.”

That’s too low down for me… (though it’s good for it to be in Retraction Watch). Even stooping to the level of “The Journal of Psychological Pseudoscience” strikes me as largely a waste of time, at least for meta-methodological efforts. January 25, 2015 note: Given the replication projects, and the fact that a meta-methodological critique of them IS worthwhile, this claim should be qualified. Remember that this post was first blogged in June 2013.

I was hastily making these same points in an e-mail to A. Gelman just yesterday:

E-mail to Gelman: Yes, the idea that X should be published iff p < .05 is attained on an interesting topic is obviously crazy.

I keep emphasizing that the problems of design, and of linking statistics to substantive claims, are the places to launch a critique, and the onus is on the researcher to show how violations are avoided. … I haven’t looked at the ovulation study (but this kind of thing has been done a zillion times), and there are a zillion confounding factors and other sources of distortion that I know were not ruled out. I’m prepared to abide such studies as akin to Zoltar at the fair [Zoltar the fortune teller]. Or view it as a human interest story—let’s see what amusing data they collected […oh, so they didn’t even know whether the women they questioned were ovulating]. You talk of top psych journals, but I see utter travesties in the ones you call top. I admit I have little tolerance for this stuff, but I fail to see how adopting a better statistical methodology could help them. …

Look, there aren’t real regularities in many, many areas–better statistics could only reveal this to an honest researcher. If Stapel had actually collected data on M&Ms and having a mug with “kapitalisme” in front of subjects, it would still be B.S.! There are a lot of things in the world I consider crackpot. They may use some measuring devices, and I don’t blame those measuring devices simply because they occupy a place in a pseudoscience or “pre-science” or “a science-wannabe”. Do I think we should get rid of pseudoscience? Yes! [At least if they have pretensions to science, and are not described as “for entertainment purposes only”[2].] But I’m afraid this would shut down [or radically redescribe] a lot more fields than you and most others would agree to. So it’s live and let live, and does anyone really think it’s hurting honest science very much?

There are fields, like (at least parts of) experimental psychology, that have been trying to get scientific by relying on formal statistical methods rather than by doing science. We get pretensions to science, and then, when things don’t work out, they blame the tools. First significance tests, then confidence intervals, then meta-analysis… do you think these same people are going to get the cumulative understanding they seek when they move to Bayesian methods? Recall [Frank] Schmidt in one of my Saturday night comedies, rhapsodizing about meta-analysis:

“It means that the behavioral and social sciences can attain the status of true sciences: they are not doomed forever to the status of quasi-sciences or pseudoscience. … [T]he gloom, cynicism, and nihilism that have enveloped many in the behavioral and social sciences is lifting. Young people starting out in the behavioral and social sciences today can hope for a much brighter future.” (Schmidt 1996)
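A quick way to see why the “publish X iff p < .05” criterion in the e-mail above is crazy: simulate a literature built on a tiny true effect and keep only the “significant” results. The sketch below is mine, purely for illustration; the numbers (a true standardized effect of 0.1, 20 subjects per group, two-sample t-tests) are made up.

```python
# Illustrative sketch only: what a "publish iff p < .05" filter does to a
# literature whose true effect is tiny. All numbers are invented for the example.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
true_d, n, n_studies = 0.1, 20, 10_000   # true effect, per-group sample size, number of studies

published = []
for _ in range(n_studies):
    control = rng.normal(0.0, 1.0, n)
    treated = rng.normal(true_d, 1.0, n)
    _, p = stats.ttest_ind(treated, control)   # ordinary two-sided t-test
    if p < 0.05:                               # the journal's sole criterion
        published.append(treated.mean() - control.mean())

print(f"true effect:                 {true_d}")
print(f"share of studies published:  {len(published) / n_studies:.1%}")
print(f"mean published effect:       {np.mean(published):.2f}")
```

Only a small fraction of the simulated studies clear the filter, and the effects they report average several times the true effect (some even point the wrong way). No fraud or QRPs are needed for the filter alone to manufacture an exaggerated “finding”, and swapping in a different formal methodology does nothing about the selection if only the winners get reported.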

III. The Dale Carnegie salesman fallacy

It’s not just that bending over backwards to criticize the most blatant abuses of statistics is a waste of time. I also think dancing the pseudoscientific limbo too low has a tendency to promote its very own fallacy! I don’t know if it has a name, so I made one up. Carnegie didn’t mean this to be used fallaciously, but merely as a means to a positive sales pitch for an idea, call it H. You want to convince a person of H? Get them to say yes to a series of claims first, then throw in H and let them make the leap to accept H too. “You agree that the p-values in the ovulation study show nothing?” “Yes” “You agree that the study on bicep diameter is bunk?” “Yes, yes”, and “That study on ESP—pseudoscientific, yes?” “Yes, yes, yes!” Then announce, “I happen to favor operational probalogist statistics (H)”. Nothing has been said to advance H; no reasons have been given that it avoids the problems raised. But all those yeses may well lead the person to say yes to H, and even to imagine an argument has been given. Dale Carnegie was a shrewd man.

Note added Jan 24, 2015: You might be interested in the (brief) exchange between Gelman and me in the comments on the original post.
Of relevance was the later post on the replication crisis in psychology: https://errorstatistics.com/2014/06/30/some-ironies-in-the-replication-crisis-in-social-psychology-1st-installment/

[1] Vernon Smith ends his paper:

My personal experience as an experimental economist since 1956 resonates well with Mayo’s critique of Lakatos: “Lakatos, recall, gives up on justifying control; at best we decide—by appeal to convention—that the experiment is controlled. … I reject Lakatos and others’ apprehension about experimental control. Happily, the image of experimental testing that gives these philosophers cold feet bears little resemblance to actual experimental learning. Literal control is not needed to correctly attribute experimental results (whether to affirm or deny a hypothesis). Enough experimental knowledge will do. Nor need it be assured that the various factors in the experimental context have no influence on the result in question—far from it. A more typical strategy is to learn enough about the type and extent of their influences and then estimate their likely effects in the given experiment” [Mayo, EGEK 1996, p. 240]. V. Smith, “Method in Experiment: Rhetoric and Reality”, 2001, p. 29.

My example in that chapter of EGEK was linking statistical models in experiments on Brownian motion (by Brown).

[2] I actually like Zoltar (or Zoltan) fortune-telling machines, and just the other day I was delighted to find one in a costume store on 21st St.


Categories: junk science, Statistical fraudbusting, Statistics


3 thoughts on “What do these share in common: m&ms, limbo stick, ovulation, Dale Carnegie? Sat night potpourri”

  1. Now that we have replication and reproduction research in social psych, the question of the payoff of putting a lot of effort into replicating the statistics and such recurs. Even they don’t consider the “chump effects” on the order of those ovulation “studies”. The general public already knows what’s wrong with them, even if some science writers can’t resist writing splashy articles on them. But there is a real question as to the relevance and positive upshot of this work because, while I don’t claim to have studied more than a handful of the articles, I don’t get the sense that a methodological critique is emerging. The way they’ve gone about it strikes me as bizarre, while impressive in its magnitude. At times it appears even to supply an occasion for tearing down the statistical methodology rather than recognizing how much of the problem has little if anything to do with the statistics. Why does psych tend so often to have the mindset that all they need is a better formal methodology? With rare exceptions I don’t see a growth of understanding of scientific inquiry, and now that QRPs (questionable research practices) are more prevalent, we even have people declaring they’re the norm. (See the last two posts.)

  2. I inserted “much of” before “social psychology” in reaction to some twitterers in the field. It’s interesting that this was June 2013, before the replication projects, and I don’t feel they’re helping so far. I don’t think all useful or interesting enterprises have to be sciences, but they shouldn’t pretend to be. I think there is some work in philosophy that has tried to be scientific, or “naturalized,” or made “experimental,” and the attempt is making it bad philosophy. I don’t want to generalize 1000%, but this is a blog, remember. If everything I said here had to be on the same level as published writing, I wouldn’t have a blog.

  3. Strange. I received around 100 notifications on Twitter last night reacting to this blog, and not a single comment. I spoze I should be relieved; some claimed I was defaming social psych. I even added “much of” in the second paragraph to placate them. I reminded them that I was admitting my standards are higher than most. But these two reposts (this one and the last) were encouraged by the one before, in which we heard the view that since QRPs are so common in this field, committing them is actually “in accordance with” normal scientific practice for that field. So I reran these two posts from 2013. I did consider, though, whether things had changed now that the social psych people are busy restoring credibility to their field via replication and reproducibility research. That’s why I said at the start that my views hadn’t “changed much”.
    https://errorstatistics.com/2015/01/21/some-statistical-dirty-laundry-2/

