So what’s the allegation that the prosecutors are being duplicitous about statistical evidence in the case discussed in my two previous (‘Bad Statistics’) posts? As a non-lawyer, I will ponder only the evidential (and not the criminal) issues involved.
“After the conviction, Dr. Harkonen’s counsel moved for a new trial on grounds of newly discovered evidence. Dr. Harkonen’s counsel hoisted the prosecutors with their own petards, by quoting the government’s amicus brief to the United States Supreme Court in Matrixx Initiatives Inc. v. Siracusano, 131 S. Ct. 1309 (2011). In Matrixx, the securities fraud plaintiffs contended that they need not plead ‘statistically significant’ evidence for adverse drug effects.” (Schachtman’s part 2, ‘The Duplicity Problem – The Matrixx Motion’)
The Matrixx case is another philstat/law/stock example taken up in this blog here, here, and here. Why are the Harkonen prosecutors “hoisted with their own petards” (a great expression, by the way)?
(Corrected first sentence) The reasoning seems to go like this: Matrixx could still be found guilty of securities fraud for failing to report adverse effects related to its over-the-counter drug Zicam, even if those effects were non-statistically significant. If non-statistically significant effects should have been reported in the Matrixx case, then Harkonen’s having reported the non-statistically significant subgroup is in sync with the government’s requirement. To claim that Matrixx should report non-statistically significant risks, and then to turn around and claim that Harkonen should not report non-statistically significant benefits is apparently inconsistent.
Really?
The two cases are importantly disanalogous on a number of grounds. Specifics can be found in the Matrixx posts cited above. In fact, one might argue that the Matrixx case actually strengthens the case against Harkonen. Giving an overly rosy picture of, or downplaying, information about potential regulatory problems with a drug is likely to be deceptive for investor assessment. Even setting aside the fact that Matrixx concerns securities fraud, and granting that the ruling (by the Supreme Court) mentions, as an aside (or obiter dicta), that:
(1) The absence of statistical significance does not preclude there being a warranted ground for inferring (or claiming to have evidence that) a drug caused an adverse side effect.
This is still very different from claiming
(2) The absence of statistical significance (in the case of a post-data subgroup) provides a warranted ground for inferring (or claiming to have evidence that) this drug, with its own serious side effects, has a survival benefit.
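What makes (2) so different from (1) is the multiplicity lurking behind any post-data subgroup claim: if one is free to carve out subgroups after seeing the data, an apparently impressive result in some subgroup is easy to come by even when the drug does nothing. Here is a minimal simulation sketch (in Python) of that point; the subgroup definitions, sample sizes, and event rates are purely illustrative assumptions, not the actual trial data.

```python
# Illustrative sketch only: simulate a two-arm trial with NO true benefit,
# scan hypothetical post-hoc subgroups, and record how often at least one
# subgroup shows a nominally "significant" difference (p < 0.05).
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

def one_trial(n_per_arm=200, n_subgroups=10, p_event=0.15):
    """Simulate a null trial; return the smallest nominal subgroup p-value."""
    covariate = rng.integers(0, n_subgroups, size=2 * n_per_arm)  # post-hoc grouping variable
    arm = np.repeat([0, 1], n_per_arm)            # 0 = control, 1 = treatment
    event = rng.random(2 * n_per_arm) < p_event   # same event rate in both arms (no true effect)
    p_values = []
    for g in range(n_subgroups):
        in_group = covariate == g
        treated = event[in_group & (arm == 1)]
        control = event[in_group & (arm == 0)]
        if len(treated) < 5 or len(control) < 5:
            continue
        # two-proportion z-test (normal approximation)
        pooled = np.concatenate([treated, control]).mean()
        se = np.sqrt(pooled * (1 - pooled) * (1 / len(treated) + 1 / len(control)))
        if se == 0:
            continue
        z = (control.mean() - treated.mean()) / se
        p_values.append(2 * norm.sf(abs(z)))
    return min(p_values) if p_values else 1.0

n_sims = 2000
hits = sum(one_trial() < 0.05 for _ in range(n_sims))
print(f"Proportion of null trials with a 'significant' subgroup: {hits / n_sims:.2f}")
```

With ten post-hoc subgroups each tested at the 0.05 level, the chance of at least one nominally “significant” subgroup under the null far exceeds 5 percent, which is why pre-specification of endpoints matters to the evidential warrant claimed in (2).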
But there is a lesson: When it comes to evidence that is relevant to regulation and policy, alterations of methodological standards initially made in the interest of strengthening precautionary standpoints may be (and often are) used later to weaken precautions. Tampering with standards of evidence intended to increase the probability of revealing risks to the public tends to backfire. Admittedly, the obiter dicta** in the Matrixx case, at least, are open to lawyerly use in undermining the government’s position in the current case.
*Harkonen himself claimed the report was intended for investors.
**Here the “dicta” are throwaway remarks by the Supreme Court on (lack of) statistical significance and causal inference. See earlier post here.
Mayo,
A few points. First, Matrixx wasn’t found guilty of anything. The Matrixx case was a civil securities fraud case brought by unhappy investors. Second, Matrixx wasn’t found to have done anything. The case was dismissed at the trial court and reinstated on appeal. The issue on appeal was whether the pleadings included sufficient factual allegations of the claimed securities fraud. The Supreme Court held that the pleadings were adequate for the case to move forward.
Why the claim of inadequate pleadings? Matrixx kvetched that plaintiffs did not allege that the evidence withheld from them showed a statistically significant outcome. Now the government, plaintiffs, and ultimately the Supreme Court said: so what? Evidence can be material to our investment decisions even when it falls far short of “demonstrating” causation.
So causation was out in Matrixx, but the Court and government addressed it at length anyway. I’ve addressed the government’s brief in Matrixx at greater length today, in a new post.
So of course, the cases are disanalogous in many ways. One criminal; one civil. One involved the taking of evidence; the other looked only at the complaint. One is about claiming an efficacy outcome; the other about not revealing evidence of a safety outcome.
Yes, the precautionary nature of the FDA’s mandate permits it to act upon a showing less than what the scientific community would consider “adequate” or “sufficient” evidence of causality of a harm. This is what makes the facts pleaded in Matrixx relevant to the materiality requirement: plaintiffs must show an intentionally misleading or false statement (or a failure to disclose in the face of a duty to disclose) about a fact material to the investment community.
I don’t believe that the standard of causation is viewed differently at the FDA for safety vs. efficacy. I do think the FDA frequently acts without the felt need to have shown causation when it is acting to protect consumer safety.
The requirements of criminal wire fraud are different. Harkonen can defend on grounds that he got bad advice, or that he perceived a conflict among expert witnesses. (The government also had a very difficult time showing that anyone relied upon the press release in isolation.)
The duplicity was the government’s insistence on a point that was ultimately irrelevant, but that has taken on a life of its own among federal judges who are not reading the Supreme Court’s opinion with any particular care: that statistical significance is not necessary to show causation. (Recall there was nothing that could be statistically significant in Matrixx; the major allegations concerned case series. The government’s brief acknowledged this, but then improvidently pushed on.) Even though causation was eliminated from the “materiality” consideration in Matrixx, the government had painted with a very broad brush about how statistical significance wasn’t necessary for a causal inference.
In my third post on the Harkonen case, I detailed some errors in the Solicitor General’s description of statistical significance. I for one find a criminal prosecution for improperly assessing evidence or describing inferences to be distasteful, especially when the government cannot get it right.
Don’t get me wrong. The subgroup data dredging was bad statistical practice, and long before the jury returned its conviction, InterMune had to shut down its INSPIRE phase III trial designed to test the new subgroup outcome. No surprise there.
Nathan
Nathan: I corrected a sentence of my post, but I think my initial Matrixx post is clear. Still, from my standpoint as a non-lawyer, there seems to be some ambiguity between the statistical significance of a stock price/market effect (associated with a company C), on the one hand, and the statistical significance of an effect on a person who takes a drug (sold by company C), on the other. I’m not in a position to entirely disentangle how these very different effects enter into these two cases. But I will keep my eye out for such rulings.
I’d still like to know what, in the best case, you would like to see happen regarding the dicta in particular, and the (“distasteful”) misunderstandings of basic statistical notions in general. (Shouldn’t these prosecutors be required to take Schachtman’s course?)
Mayo,
I suppose I would like to see the courts stop engaging in senseless generalizations without details or context. Is statistical significance necessary for every assessment of medical causation? Of course not. I gave the Ferebee case as an example, but consider also many infectious diseases. The historical studies of manganism, a movement disorder caused by very high levels of manganese, used no statistical analyses; none were needed. Everyone who worked underground at the rock face with drills got the disease; no one else did.
But what happens with the generalization is that the lower courts jump at the opportunity to close their minds in the face of epidemiologic evidence, which does require statistical analyses. The Chantix and Cheek cases were examples of the mischief generated by the process.
The statistical significance that Matrixx Initiatives wanted had nothing to do with connecting the fall of its stock price to the FDA’s announcement of concerns about the safety of Zicam. That reaction was pretty obvious; no statistical analyses were needed. So both Matrixx and Harkonen involve health outcomes; the first a safety outcome (anosmia) and the second an efficacy outcome (or lack thereof from Actimmune in IPF patients).
Your question whether the prosecutors (or other lawyers and judges) should take my course is important. After the Supreme Court decided Daubert v. Merrell Dow Pharmaceuticals in 1993, the Administrative Office of the Courts, through the Federal Judicial Center, undertook a major educational campaign to shore up the federal bench on science, math, statistics, etc. The FJC’s Reference Manual on Scientific Evidence is now in its 3d edition. I have fussed with some of what’s in the Manual, but generally the chapter on statistics (by David Kaye and the late David Freedman) is very good. I would be happy if judges and lawyers read, understood, and wrote briefs and opinions that incorporated this chapter, and others.
Recently, Justice Scalia gave a talk at which he railed against law school elective courses, “Law & ****.” He urged students to stick to straight law courses. I think the Justice is badly mistaken here. I would love all my students to come to law school with a PhD in philosophy of science, and with a deep background in science and math, as well as literature. I believe that the law often takes shape only with a deep understanding of the realm of conduct that is being controlled through rules and regulations. If students are going to go on to practice in the area that I call “health effects” law, then they should have an understanding of what a health effect is, and what is acceptable evidence of one.
Nathan
Nathan: I’m afraid that at the moment a specialization in philosophy of science will rarely come with knowledge of statistical inference or of how to scrutinize statistical methods and models. As much as philosophers of science are interested in the interface of science/evidence/modeling/knowledge, the focus for the last few decades (unlike in the 1960s, 70s, and 80s) has been probability & decision/subjective Bayesianism. I’ve tried to encourage a change, as they’re missing out on a number of issues that could really advance the field. (e.g., https://errorstatistics.com/2012/11/04/philstat-so-youre-looking-for-a-ph-d-dissertation-topic/)