science-wise screening

Fallacies of Rejection, Nouvelle Cuisine, and assorted New Monsters

Jackie Mason

Whenever I’m in London, my criminologist friend Katrin H. and I go in search of stand-up comedy. Since it’s Saturday night (and I’m in London), we’re setting out to find a good comedy club (I’ll complete this post upon return). A few years ago we heard Jackie Mason do his shtick, a one-man show billed as his swan song to England. It was a repertoire of his “Greatest Hits” without a new or updated joke in the mix. Still, hearing his rants for the nth time was often quite hilarious. It turns out he has already been back doing another “final shtick tour” in England, but not tonight.

A sample: If you want to eat nothing, eat nouvelle cuisine. Do you know what it means? No food. The smaller the portion, the more impressed people are, so long as the food’s got a fancy French name, haute cuisine. An empty plate with sauce!

As one critic wrote, Mason’s jokes “offer a window to a different era,” one whose caricatures and biases one can only hope we’ve moved beyond.

But it’s one thing for Jackie Mason to scowl at a seat in the front row and yell to the shocked audience member in his imagination, “These are jokes! They are just jokes!” and another to reprise statistical howlers, which, to me, are not jokes. This blog found its reason for being partly as a place to expose, understand, and avoid them. I had earlier used this Jackie Mason opening to launch into a well-known fallacy of rejection using statistical significance tests. This time around I’m going to go further. I began by needling some leading philosophers of statistics: Continue reading

Categories: reforming the reformers, science-wise screening, Statistical power, statistical tests, Statistics

Beware of questionable front page articles warning you to beware of questionable front page articles (2)

Such articles have continued apace since this blogpost from 2013. During that time, meta-research, replication studies, statistical forensics, and fraudbusting have become popular academic fields in their own right. Since I regard the ‘programme’ (to use a Lakatosian term) as essentially a part of the philosophy and methodology of science, I’m all in favor of it (I employed the term “metastatistics” eons ago). But, as a philosopher, I claim there’s a pressing need for meta-meta-research, i.e., a conceptual, logical, and methodological scrutiny of presuppositions and gaps in meta-level work itself. There is an issue I raised in the section “But what about the statistics?” below that hasn’t been addressed: I question the way size and power (from statistical hypothesis testing) are employed in a “diagnostics and screening” computation that underlies most “most findings are false” articles. (This is (2) in my new “Let PBP” series and follows upon my last post; comments in burgundy were added 12/5/15.)
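
The post does not reproduce that computation here, but a minimal sketch of the screening arithmetic in question runs as follows. The function name and the plugged-in numbers (a 10% prevalence of true effects, α = 0.05, power of 0.8 or 0.2) are illustrative assumptions, not figures taken from this post or from the articles it criticizes.

```python
# A minimal sketch (not from the post) of the "science-wise screening"
# arithmetic being questioned: treating significance tests as a diagnostic
# screen and converting size, power, and an assumed prevalence of true
# effects into a rate of false findings among "significant" results.

def false_finding_rate(prevalence, alpha, power):
    """P(null hypothesis is true | test rejects), under the screening model.

    prevalence : assumed fraction of tested hypotheses where a real effect exists
    alpha      : the test's Type I error probability (its "size")
    power      : probability of rejecting when a real effect exists (1 - beta)
    """
    true_positives = power * prevalence          # real effects correctly flagged
    false_positives = alpha * (1 - prevalence)   # true nulls flagged anyway
    return false_positives / (true_positives + false_positives)

# Hypothetical inputs of the sort such articles plug in:
print(false_finding_rate(prevalence=0.10, alpha=0.05, power=0.8))  # ~0.36
print(false_finding_rate(prevalence=0.10, alpha=0.05, power=0.2))  # ~0.69
```

On this picture, the lower the assumed prevalence and power, the higher the computed proportion of “false findings” among rejections; the complaint above is precisely with how size and power get transposed into such figures.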

In this time of government cut-backs and sequester, scientists are under increased pressure to dream up ever new strategies to publish attention-getting articles with eye-catching, but inadequately scrutinized, conjectures. Science writers are under similar pressures, and to this end they have found a way to deliver up at least one fire-breathing, front page article a month. How? By writing minor variations on an article about how, in this time of government cut-backs and sequester, scientists are under increased pressure to dream up ever new strategies to publish attention-getting articles with eye-catching, but inadequately scrutinized, conjectures. (I’m prepared to admit that meta-research consciousness raising, like “self-help books,” warrants frequent revisiting. Lessons are forgotten, and there are always new users of statistics.)

Thus every month or so we see retreads on why most scientific claims are unreliable, biased, wrong, and not even wrong. Maybe that’s the reason the authors of a recent article in The Economist (“Trouble at the Lab“) remain anonymous. (I realize that is their general policy.)  Continue reading

Categories: junk science, Let PBP, P-values, science-wise screening, Statistics
