Will the Real Junk Science Please Stand Up? (critical thinking)

Equivocations about “junk science” came up in today’s “critical thinking” class; if anything, the current situation is worse than 2 years ago when I posted this.

Have you ever noticed in wranglings over evidence-based policy that it’s always one side that’s politicizing the evidence—the side whose policy one doesn’t like? The evidence on the near side, or your side, however, is solid science. Let’s call those who first coined the term “junk science” Group 1. For Group 1, junk science is bad science that is used to defend pro-regulatory stances, whereas sound science would identify errors in reports of potential risk. For the challengers—let’s call them Group 2—junk science is bad science that is used to defend the anti-regulatory stance, whereas sound science would identify potential risks, advocate precautionary stances, and recognize errors where risk is denied. Both groups agree that politicizing science is very, very bad—but it’s only the other group that does it!

A given print exposé exploring the distortions of fact on one side or the other routinely showers wild praise on its own side’s—its science’s and its policy’s—objectivity, its adherence to the facts, just the facts. How impressed might we be with the text or the group that admitted to its own biases?

Take, say, global warming, genetically modified crops, electric-power lines, medical diagnostic testing. Group 1 alleges that those who point up the risks (actual or potential) have a vested interest in construing the evidence that exists (and the gaps in the evidence) accordingly, which may bias the relevant science and pressure scientists to be politically correct. Group 2 alleges the reverse, pointing to industry biases in the analysis or reanalysis of data and pressures on scientists doing industry-funded work to go along to get along.

When the battle between the two groups is joined, issues of evidence—what counts as bad/good evidence for a given claim—and issues of regulation and policy—what are “acceptable” standards of risk/benefit—may become so entangled that no one recognizes how much of the disagreement stems from divergent assumptions about how models are produced and used, as well as from contrary stands on the foundations of uncertain knowledge and statistical inference. For the most part, the core disagreement is mistakenly attributed to divergent policy values.

Over the years I have tried my hand at sorting out these debates (e.g., Mayo and Hollander 1991). My account of testing actually came into being to systematize reasoning from statistically insignificant results in evidence-based risk policy: no evidence of risk is not evidence of no risk! (see October 5). Unlike the disputants who get the most attention, I have argued that the current polarization cries out for critical or meta-scientific scrutiny of the uncertainties, assumptions, and risks of error that are part and parcel of the gathering and interpreting of evidence on both sides. Unhappily, the disputants tend not to welcome this position—and are even hostile to it. This used to shock me when I was starting out—why would those who were trying to promote greater risk accountability not want to avail themselves of ways to hold the agencies and companies responsible when they bury risks in fallacious interpretations of statistically insignificant results? By now, I am used to it.
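
To make the slogan concrete, here is a minimal sketch (the baseline risk, elevated risk, sample size, and test are invented for illustration, not drawn from any study discussed here) of why a statistically insignificant result from a low-powered study is poor evidence of no risk:

```python
# Sketch with invented numbers: power of a two-sided, two-sample
# proportion test (normal approximation) to detect a doubling of a
# 5% baseline risk with n = 100 per arm.
from math import sqrt
from statistics import NormalDist

p0, p1, n, alpha = 0.05, 0.10, 100, 0.05
z_crit = NormalDist().inv_cdf(1 - alpha / 2)

pooled = (p0 + p1) / 2
se_null = sqrt(2 * pooled * (1 - pooled) / n)         # SE if there is no difference
se_alt = sqrt(p0 * (1 - p0) / n + p1 * (1 - p1) / n)  # SE if the elevated risk is real

# Probability the test rejects when the elevated risk is genuine:
power = 1 - NormalDist().cdf((z_crit * se_null - (p1 - p0)) / se_alt)
print(f"power = {power:.2f}")  # ~0.27
```

With power around 27%, an insignificant result is what we should expect almost three times out of four even when the risk is real; failing to reject tells us about the study’s sensitivity, not about the absence of risk.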

This isn’t to say that there’s no honest self-scrutiny going on, but only that all sides are so used to anticipating conspiracies of bias that my position is likely viewed as yet another politically motivated ruse. So what we are left with is scientific evidence having less and less of a role in constraining or adjudicating disputes. Even to suggest an evidential adjudication is to risk being attacked as a paid insider.

I agree with David Michaels (2008, 61) that “the battle for the integrity of science is rooted in issues of methodology,” but winning the battle would demand something that both sides are increasingly unwilling to grant. It comes as no surprise that some of the best scientists stay as far away as possible from such controversial science.

Mayo, D. and Hollander, R. (eds.). 1991. Acceptable Evidence: Science and Values in Risk Management. Oxford: Oxford University Press.

Mayo, D. 1991. "Sociological versus Metascientific Views of Risk Assessment," in D. Mayo and R. Hollander (eds.), Acceptable Evidence: 249-79.

Michaels, D. 2008. Doubt Is Their Product. Oxford: Oxford University Press.

Categories: critical thinking, junk science, Objectivity


16 thoughts on “Will the Real Junk Science Please Stand Up? (critical thinking)”

  1. Nathan Schachtman

    Mayo,

    Interesting post. On my blog, I have tried to call out both Group I and Group II. I agree that it is revealing that some political conservatives express profound distrust of observational studies, but then embrace rather doubtful ones when those studies suggest associations between abortion and depression or breast cancer. Clearly, the examples can be multiplied on both sides.

    I would have hoped that an improved understanding of statistical inference, and of meta-analytic techniques, would prevent either Group from declaring victory solely because there are multiple study results without statistically significant results. Most of the disputes you reference, I believe, involve observational studies, and for such studies internal and external validity considerations are often much more important sources of error than incorrect interpretation of statistical results. Bias and confounding in the PM2.5 epidemiology of cardiovascular diseases certainly come to mind.
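
    A minimal sketch of the meta-analytic point (the effect estimates and standard errors below are invented for illustration): several studies, none individually significant, can pool by inverse-variance weighting into a clearly significant fixed-effect estimate, so a run of “null” results settles nothing by itself.

    ```python
    # Fixed-effect (inverse-variance) meta-analysis of three hypothetical
    # studies, none significant on its own. All numbers are invented.
    from math import sqrt
    from statistics import NormalDist

    norm = NormalDist()
    studies = [(0.20, 0.12), (0.15, 0.10), (0.25, 0.15)]  # (estimate, std. error)

    weights = [1 / se**2 for _, se in studies]
    pooled = sum(w * est for (est, _), w in zip(studies, weights)) / sum(weights)
    se_pooled = sqrt(1 / sum(weights))

    for est, se in studies:
        p_i = 2 * (1 - norm.cdf(abs(est / se)))
        print(f"single study: p = {p_i:.3f}")  # each p > 0.05
    z = pooled / se_pooled
    print(f"pooled: {pooled:.2f} (SE {se_pooled:.3f}), p = {2 * (1 - norm.cdf(abs(z))):.4f}")  # ~0.006
    ```

    Of course, pooling only sharpens precision; it does nothing to cure the bias and confounding worries that dominate observational studies.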

    Nathan

    • Nate: It’s interesting because your blog was the first I’d seen that challenged some of the people I’d place in Group 2, the supposed “good guys”. (And I agree with many of your criticisms, from a legal standpoint*.) The few studies I wrote on or followed were clinical trials of drugs. I took up numerous methodological issues, not separable from incorrect interpretations of statistical results. But what amazes me is when Group 2 rejects/suspects an appeal to evidence/method, even when it could strengthen their intended goal. It’s as if some regard appealing to data and method as insufficiently political or too sciencey. Then there’s Ziliak and McCloskey—yet a different story.

      With respect to the controversial issues I mentioned here, it is now just all politics all the time: Group 2 may have now surpassed anything Group 1 gets away with. So my experience has caused me to shift some…

      * I still disagree on Harkonen.

  2. Nathan: By chance, I noticed a recent article on Harkonen immediately after writing my comment.
    http://articles.washingtonpost.com/2013-09-23/national/42314943_1_intermune-scott-harkonen-actimmune

    Gelman has this post: http://andrewgelman.com/2013/10/03/on-house-arrest-for-p-hacking/

    The case has at least two parts; the current issue concerns “free speech”. https://errorstatistics.com/2012/12/13/bad-statistics-crime-or-free-speech/
    https://errorstatistics.com/2012/12/19/philstatlawstock-more-on-bad-statistics-schachtman/
    But the whole thing also connects to the controversial case where the Supreme Court (is thought to have) passed judgment on significance tests in relation to the Matrixx case, which gets back to Ziliak and McCloskey (please search the blog if interested).

  3. Nathan Schachtman

    Mayo,

    I know that I have failed to persuade you on the Harkonen case, but yes; I had seen Brown’s article in the Washington Post. Steve Goodman (and Don Rubin) could not join my amicus brief because they had been declarants in the sentencing/post-verdict challenges. Sadly, they were not involved in the trial, and their post-trial testimony was given limited weight against the Fleming orthodox view of statistical causal inference.

    And yes, the Matrixx Initiatives v. Siracusano case is front and center in several respects. There is the government’s duplicity in arguing, first, in Matrixx that statistical significance was not necessary for causal inference, and then, second, in the Harkonen case that “significance” was so important that a p = 0.004 for a non-prespecified subgroup (of a prescribed secondary endpoint where p = 0.08 ITT, and p = 0.055 per protocol) used to support a claim of efficacy (causal benefit in a randomized clinical trial otherwise agreed to have been run well, and data otherwise sound), in conjunction with a prior clinical trial showing benefit (p < 0.001) and with beaucoup supporting mechanistic research on the specific interferon variety.
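
    For readers without the case background, here is a toy simulation (the setup is mine, not from the trial record: ten post-hoc subgroups, no true effect in any) of why a small p-value from a non-prespecified subgroup carries less evidential weight than the same p-value from a prespecified endpoint:

    ```python
    # Toy simulation (assumptions mine, not from the trial record): in a
    # null world with k = 10 post-hoc subgroups and no true effect anywhere,
    # how often does the *best* subgroup p-value look impressive?
    import random
    from statistics import NormalDist

    random.seed(1)
    k, trials = 10, 20_000
    norm = NormalDist()

    hits = 0
    for _ in range(trials):
        # each subgroup z-statistic is pure noise under the null
        min_p = min(2 * (1 - norm.cdf(abs(random.gauss(0, 1)))) for _ in range(k))
        hits += min_p <= 0.05
    print(f"P(best of {k} null subgroups reaches p <= 0.05) ~ {hits / trials:.2f}")  # ~0.40
    ```

    Roughly forty percent of the time, the best-looking of ten null subgroups clears p ≤ 0.05, which is why the provenance of a subgroup p-value matters so much.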

    The Supreme Court adopted the government's position in Matrixx (but only in non-binding dicta – an important limitation), but it was harrumphed by Ziliak as a vindication of some weird anti-Fisher proto-Bayesian views that he and his colleague advanced in an amicus brief in the case. The Matrixx decision has been widely criticized, and I think it is wrong to suggest that statistical testing is unnecessary or that statistical significance is unnecessary IN ALL CASES. Indeed, one of the cases relied upon by the court (Wells v. Ortho) for the proposition that statistical significance was unnecessary involved a plaintiffs' expert witness who had relied upon several studies, at least two of which had statistically significant results. (A point that shows that even very smart judges and their law clerks, who do not understand statistical inference, sometimes don't even read and understand the cases that they cite!)

    If either Group I or II had evidence as good as Harkonen had in his press release, I would not fuss with their causal claims, although I might disagree. Most of the causal claims involved in the political realm involve observational, not experimental research, and the sticking points are not p-values, but internal and external validity.

    One curious example of apparent agreement between Groups I and II came in the 2008 presidential election, when both McCain and Obama made pandering comments about the supposed connection between vaccines and autism. I suppose you can trace a lot of junk science back to celebrities, such as Jenny McCarthy and Oprah Winfrey, with help of course from Mr. Wakefield and the plaintiffs' bar.

    • Nate: Well, I’ve argued as to why the logic actually supports the differing treatment, even taking the earlier obiter dicta as just that. I can review it if need be.

      I don’t know that most controversial claims (is that what you mean by the political realm?) are observational, and internal/external validity considerations are just part of the interpretive mix, not distinct from what you see as statistical issues.
      Are you saying they both gave support to the connection between vaccines and autism?

      There’s probably some kind of theory one might support that whichever group is in power is more likely to criticize rivals as committing junk science, while being more likely to commit it themselves.

  4. Nathan Schachtman

    In my convoluted writing, I left out the important point about the government’s duplicity: despite the low p-value, the prior clinical trial, and the supporting mechanistic research, the government maintained that Harkonen’s statement was “false,” such that it would support a conviction under the Wire Fraud Act. The Act hasn’t been used in this context for many decades, and the last time the government tried, the Supreme Court invalidated the conviction.

    • The man was found guilty and he is guilty, and it’s not some slight p-value hacking. What I wrote here last night is obviously too quick, but I give references above. I will be very curious to hear how his last appeal turns out, December?

  5. Here’s something on Retraction Watch today: spoofing hundreds of journals with a fake paper: http://retractionwatch.wordpress.com/2013/10/03/science-reporter-spoofs-hundreds-of-journals-with-a-fake-paper/#more-15895

  6. Nathan Schachtman

    The pending petition is a request for review. If granted, then the Court will set the case down for a full briefing and oral argument next term (Oct. 2014). The Court will likely decide whether or not to take the case by the end of this calendar year.

    The spoof was interesting. Did you notice that the bogus article was submitted to pay-to-play journals but not to any “legitimate” subscription journals? It was an uncontrolled experiment! Actually you couldn’t even calculate a p-value; just like the Matrixx case. Think how much stronger the evidence was that Dr. Harkonen was working with!!

    • Nate: I didn’t have time to read it (the spoof), but who said they were testing the gullibility of legitimate subscription journals? It would be quite an indictment of what you call “pay to play” journals, and I’ve read serious scientists argue that it’s really dumbing down science rags, I mean journals. (Can’t cross out with comments, or rather, I don’t know how.)

      On the Hark case, I hope they don’t waste more taxpayer money on yet another year of trials, unless of course it helps you. (Can I get into trouble for saying this?)

  7. Nathan Schachtman

    No; they weren’t testing the gullibility of the so-called legitimate journals, but my point was different. We don’t know what to make of the acceptance rate of the error-riddled manuscript unless we know how other mainstream journals handled the submission. I would love to get my hands on the manuscript that was used, but interestingly the author in Science did not make the manuscript (which is his underlying data, sort of) available.

    I doubt that there will be any further trials in the Harkonen case. If the Supreme Court takes the case, it will likely affirm or reverse and render, which means that there will be nothing to try in either event.

  8. Nathan Schachtman

    Mayo, I would urge you to read all the briefs filed in support of Supreme Court review. Given that people have a First Amendment right to claim much of anything, it is pretty remarkable when a scientist accurately presents his data, and even his p-values, but the government says that his stated inference is “false.” So remarkable that the prosecution raises questions of selective prosecution, violations of due process of law, violations of freedom of speech, etc. So there are both constitutional and statutory dimensions to the case.

    I could provide you with examples of scientists, in grant applications, who claim that a previous study “demonstrated” something or another, when actually the demonstration was doubtful in many ways. The Court should take the case to prevent the False Claims Act from being affected by this rather doubtful judicial construction of the Wire Fraud Act. (Both turn on the “falsity” issue of the use of the word “demonstration.”) And let’s not forget good old-fashioned witness perjury. If I had a penny for every expert witness who made a causal claim on weaker evidence than that used by Dr. Harkonen, I would be living a lot closer to Central Park. The point is that expert witnesses, testifying under OATH, are not going to prison in gaggles; they are not even being excluded under Federal Rule of Evidence 702 for giving causal opinions based upon weaker evidential support than Dr. Harkonen had at the time he issued a press release.

    The judgment in Dr. Harkonen’s case is a derelict on the sea of science jurisprudence. The Supreme Court can salvage the case, and it has the responsibility to do so.

    • They cannot remove the wire fraud charge, can they? Sorry, I’m forgetting the last round.

  9. I’m hearing everyone in the statistics community expects the Higgs to win the Nobel next week, although it’s not clear which individuals/institutions might get it. I have no clue if this will happen. I guess they didn’t take to heart Ziliak’s allegation that it was all junk science because it (broadly) employed significance test reasoning. Reference is in the * note at the bottom of this post: https://errorstatistics.com/2013/06/14/p-values-cant-be-trusted-except-when-used-to-argue-that-p-values-cant-be-trusted/

  10. Nathan Schachtman

    The Supreme Court would not invalidate the Wire Fraud Act, but they could invalidate the Act as applied in this case. They can do so on one of two grounds: that the application of the Act to the facts of the case is beyond the intended scope of the statute, or that it is within the scope of the statute but unconstitutional as applied.
