PhilStatLaw

Larry Laudan: “When the ‘Not-Guilty’ Falsely Pass for Innocent”, the Frequency of False Acquittals (guest post)


Professor Larry Laudan
Lecturer in Law and Philosophy
University of Texas at Austin

“When the ‘Not-Guilty’ Falsely Pass for Innocent” by Larry Laudan

While it is a belief deeply ingrained in the legal community (and among the public) that false negatives are much more common than false positives (a 10:1 ratio being the preferred guess), empirical studies of that question are few and far between. While false convictions have been carefully investigated in more than two dozen studies, there are virtually no well-designed studies of the frequency of false acquittals. The indifference to the latter question is dramatically borne out by looking at discussions among intellectuals of the two sorts of errors. (A search of Google Books identifies some 6.3k discussions of the former and only 144 treatments of the latter in the period from 1800 to now.) I’m persuaded that it is time we brought false negatives out of the shadows, not least because each such mistake carries significant potential harms, typically inflicted by falsely acquitted recidivists who are on the streets instead of in prison.

 

In criminal law, false negatives occur under two circumstances: when a guilty defendant is acquitted at trial and when an arrested, guilty defendant has the charges against him dropped or dismissed by the judge or prosecutor. Almost no one tries to measure how often either type of false negative occurs. That is partly understandable, given the fact that the legal system prohibits a judicial investigation into the correctness of an acquittal at trial; the double jeopardy principle guarantees that such acquittals are fixed in stone. Thanks in no small part to the general societal indifference to false negatives, there have been virtually no efforts to design empirical studies that would yield reliable figures on false acquittals. That means that my efforts here to estimate how often they occur must depend on a variety of indirect indicators. With a bit of ingenuity, it is possible to find data that provide strong clues as to approximately how often a truly guilty defendant is acquitted at trial and in the pre-trial process. The resulting inferences are not precise, and I will explain why as we go along. As we look at various data sources not initially designed to measure false negatives, we will see that they nonetheless provide salient information about when and why false acquittals occur, thereby enabling us to make an approximate estimate of their frequency.
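To make the arithmetic behind such estimates concrete, here is a minimal back-of-the-envelope sketch in Python. Every number in it is an illustrative assumption of mine, not a figure from Laudan or from the studies he cites; the point is only to show how an assumed error ratio and an assumed false-conviction rate jointly imply a false-acquittal rate.

```python
# Back-of-the-envelope: what a 10:1 ratio of false negatives to false
# positives would imply. ALL numbers below are illustrative assumptions.

convictions = 100_000                 # assumed annual convictions, violent crime
acquittals_and_dismissals = 60_000    # assumed acquittals + dropped/dismissed cases
false_conviction_rate = 0.02          # assumed, in the range wrongful-conviction studies discuss

false_positives = false_conviction_rate * convictions   # innocent but convicted
false_negatives = 10 * false_positives                  # the 10:1 conjecture

# Share of acquittals/dismissals that the conjecture implies are erroneous
implied_rate = false_negatives / acquittals_and_dismissals
print(f"False convictions (assumed): {false_positives:,.0f}")
print(f"False acquittals implied by 10:1: {false_negatives:,.0f}")
print(f"Implied false-acquittal rate: {implied_rate:.0%}")
```

Even this toy calculation makes plain how sensitive the implied rate is to its inputs, which is why indirect indicators can support only approximate estimates.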

My discussion of how to estimate the frequency of false negatives will fall into two parts, reflecting the stark differences between the sources of error in pleas and the sources of error in trials. (All the data to be cited here deal entirely with crimes of violence.) Continue reading

Categories: evidence-based policy, false negatives, PhilStatLaw, Statistics | Tags: | 9 Comments

Scientism and Statisticism: a conference* (i)

A lot of philosophers and scientists seem to be talking about scientism these days–either championing it or worrying about it. What is it? It’s usually a pejorative term describing an unwarranted deference to the so-called scientific method over and above other methods of inquiry. Some push it as a way to combat postmodernism (is that even still around?). Steven Pinker gives scientism a positive spin (and even offers it as a cure for the malaise of the humanities!)[1]. Anyway, I’m to talk at a conference on Scientism (*not statisticism, that’s my word) taking place in NYC May 16-17. It is organized by Massimo Pigliucci (chair of philosophy at CUNY-Lehman), who has written quite a lot on the topic in the past few years. Information can be found here. In thinking about scientism for this conference, however, I was immediately struck by this puzzle: Continue reading

Categories: Announcement, PhilStatLaw, science communication, Statistical fraudbusting, StatSci meets PhilSci | Tags: | 15 Comments

“Murder or Coincidence?” Statistical Error in Court: Richard Gill (TEDx video)

“There was a vain and ambitious hospital director. A bad statistician. … There were good medics and bad medics, good nurses and bad nurses, good cops and bad cops … Apparently, even some people in the Public Prosecution service found the witch hunt deeply disturbing.”

This is how Richard Gill, statistician at Leiden University, describes a feature film (Lucia de B.) just released about the case of Lucia de Berk, a nurse found guilty of several murders based largely on statistics. Gill is widely known (among other things) for exposing the flawed statistical analysis used to convict her, which ultimately led (after Gill’s tireless efforts) to her conviction being overturned. (I hope they translate the film into English.) In a recent e-mail Gill writes:

“The Dutch are going into an orgy of feel-good tear-jerking sentimentality as a movie comes out (the premiere is tonight) about the case. It will be a good movie, actually, but it only tells one side of the story. …When a jumbo jet goes down we find out what went wrong and prevent it from happening again. The Lucia case was a similar disaster. But no one even *knows* what went wrong. It can happen again tomorrow.

I spoke about it a couple of days ago at a TEDx event (Flanders).

You can find some p-values in my slides [“Murder by Numbers”, pasted below the video]. They were important – first in convicting Lucia, later in getting her a fair re-trial.”
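Before watching, it may help to see the selection effect at the heart of the case in miniature. The following simulation is my own illustration, not Gill’s analysis, and all of its numbers are invented: if you single out, after the fact, whichever nurse was present at the most incidents, her record looks damning even when incidents fall on shifts purely at random.

```python
import random

random.seed(1)  # reproducibility

def worst_looking_overlap(n_nurses=30, n_shifts=1000, shifts_per_nurse=100,
                          n_incidents=20):
    """Assign incidents to shifts at random, then return the largest number
    of incidents falling on any single nurse's shifts."""
    incidents = set(random.sample(range(n_shifts), n_incidents))
    best = 0
    for _ in range(n_nurses):
        on_duty = set(random.sample(range(n_shifts), shifts_per_nurse))
        best = max(best, len(on_duty & incidents))
    return best

# Each nurse expects 20 * (100/1000) = 2 incidents on her shifts by chance,
# but the worst-looking nurse among 30 will typically have two or three
# times that many, with no foul play anywhere in the data.
maxima = [worst_looking_overlap() for _ in range(200)]
print("Average 'most suspicious' overlap:", sum(maxima) / len(maxima))
```

A p-value computed for that nurse as though she had been identified in advance is therefore badly misleading; accounting for the post-hoc selection, one of Gill’s central points, changes the picture entirely.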

Since it’s Saturday night, let’s watch Gill’s TEDx talk, “Statistical Error in Court”.

Slides from the Talk: “Murder by Numbers”:

 

Categories: junk science, P-values, PhilStatLaw, science communication, Statistics | Tags: | Leave a comment

PhilStock: Bad news is bad news on Wall St. (rejected post)

I’ve been asked for a PhilStock tip. Well, remember when it could be said that “bad news is good news on Wall Street”?

No longer. Now bad is bad. I call these “blood days” on the stock market, and the only statistical advice that has held up over the past turbulent years is: Never try to catch a falling knife*.

*For more, you’ll have to seek my stock blog.

Categories: PhilStock, Rejected Posts | 8 Comments

FDA’S New Pharmacovigilance

FDA’s New Generic Drug Labeling Rule

The FDA is proposing an about-face on a controversial issue: to allow (or require? [1]) generic drug companies to alter the labels on their drugs, whereas they are currently required to keep labels identical to those used by the brand-name company. (See earlier posts here and here.) While it clearly makes sense to alert the public to newly found side effects, this change, if adopted, will open generic companies to lawsuits from which they had been immune (as determined by a 2011 Supreme Court decision). Whether or not the rule passes, the FDA is ready with a training session for you! The following is from the notice I received by e-mail: Continue reading

Categories: Announcement, PhilStatLaw, science communication | 4 Comments

PhilStock: No-pain bull

See rejected posts.

Categories: PhilStock, Rejected Posts | Leave a comment

Bad statistics: crime or free speech (II)? Harkonen update: Phil Stat / Law /Stock

There’s an update (with overview) on the infamous Harkonen case in Nature with the dubious title “Uncertainty on Trial“, first discussed in my (11/13/12) post “Bad statistics: Crime or Free speech”, and continued here. The new Nature article quotes from Steven Goodman:

“You don’t want to have on the books a conviction for a practice that many scientists do, and in fact think is critical to medical research,” says Steven Goodman, an epidemiologist at Stanford University in California who has filed a brief in support of Harkonen. …

Goodman, who was paid by Harkonen to consult on the case, contends that the government’s case is based on faulty reasoning, incorrectly equating an arbitrary threshold of statistical significance with truth. “How high does probability have to be before you’re thrown in jail?” he asks. “This would be a lot like throwing weathermen in jail if they predicted a 40% chance of rain, and it rained.”

I don’t think the case at hand is akin to the exploratory research that Goodman likely has in mind, and the rain analogy seems very far-fetched. (There’s much more to the context, but the links should suffice.) Lawyer Nathan Schachtman also has an update on his blog today. He and I usually concur, but we largely disagree on this one[i]. I see no new information that would lead me to shift my earlier arguments on the evidential issues. From my Dec. 17, 2012 post (“multiplicity and duplicity”):

So what’s the allegation that the prosecutors are being duplicitous about statistical evidence in the case discussed in my two previous (‘Bad Statistics’) posts? As a non-lawyer, I will ponder only the evidential (and not the criminal) issues involved.

“After the conviction, Dr. Harkonen’s counsel moved for a new trial on grounds of newly discovered evidence. Dr. Harkonen’s counsel hoisted the prosecutors with their own petards, by quoting the government’s amicus brief to the United States Supreme Court in Matrixx Initiatives Inc. v. Siracusano, 131 S. Ct. 1309 (2011).  In Matrixx, the securities fraud plaintiffs contended that they need not plead ‘statistically significant’ evidence for adverse drug effects.” (Schachtman’s part 2, ‘The Duplicity Problem – The Matrixx Motion’) 

The Matrixx case is another philstat/law/stock example taken up in this blog here, here, and here.  Why are the Harkonen prosecutors “hoisted with their own petards” (a great expression, by the way)? Continue reading
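For readers who want to see why the multiplicity issue matters here, the sketch below is entirely my own illustration, not a reconstruction of the Harkonen trial data: when a trial with no true effect is sliced into enough post-hoc subgroups, the smallest nominal p-value will often clear the 0.05 bar.

```python
import math
import random

random.seed(2)

def two_sided_p(z):
    """Two-sided p-value for a standard-normal test statistic."""
    return math.erfc(abs(z) / math.sqrt(2))

def min_subgroup_p(n_subgroups=10, n=50):
    """One simulated trial with NO true effect: compare treatment vs.
    control in each post-hoc subgroup, return the smallest nominal p."""
    ps = []
    for _ in range(n_subgroups):
        treat = [random.gauss(0, 1) for _ in range(n)]
        ctrl = [random.gauss(0, 1) for _ in range(n)]
        z = (sum(treat) / n - sum(ctrl) / n) / math.sqrt(2 / n)
        ps.append(two_sided_p(z))
    return min(ps)

sims = [min_subgroup_p() for _ in range(2000)]
hits = sum(p < 0.05 for p in sims) / len(sims)
print(f"P(some subgroup 'significant' | no effect): {hits:.2f}")  # ~0.40
```

With ten subgroups, the chance that at least one reaches nominal significance under the null is roughly 1 − 0.95^10 ≈ 40%, which is why a small p-value found by searching is far weaker evidence than the same p-value specified in advance.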

Categories: PhilStatLaw, PhilStock, statistical tests, Statistics | Tags: | 23 Comments

PhilStock: Bad news is good news on Wall St.

stock picture smaillSee rejected posts.

Categories: PhilStock, Rejected Posts | Leave a comment

PhilStock: Flash Freeze

A mysterious outage on the Nasdaq stock market: trading halted for over an hour now. I don’t know if it’s a computer glitch or hacking, but I know the complex, robot-run markets are frequently out of our control. Stay tuned…

Categories: PhilStock | Leave a comment

Guest Post: Larry Laudan. Why Presuming Innocence is Not a Bayesian Prior

“Why presuming innocence has nothing to do with assigning low prior probabilities to the proposition that defendant didn’t commit the crime”

by Professor Larry Laudan
Philosopher of Science*

Several of the comments to the July 17 post about the presumption of innocence suppose that jurors are asked to believe, at the outset of a trial, that the defendant did not commit the crime and that they can legitimately convict him if and only if they are eventually persuaded that it is highly likely (pursuant to the prevailing standard of proof) that he did in fact commit it. Failing that, they must find him not guilty. Many contributors here are conjecturing how confident jurors should be at the outset about defendant’s material innocence.

That is a natural enough Bayesian way of formulating the issue but I think it drastically misstates what the presumption of innocence amounts to. In my view, the presumption is not (or at least should not be) an instruction about whether jurors believe defendant did or did not commit the crime. It is, rather, an instruction about their probative attitudes.

There are three reasons for thinking this:

a) Asking a juror to begin a trial believing that defendant did not commit a crime requires a doxastic act that is probably outside the jurors’ control. It would involve asking jurors to strongly believe an empirical assertion for which they have no evidence whatsoever. It is wholly unclear that any of us can talk ourselves into resolutely believing x if we have no empirical grounds for asserting x. By contrast, asking juries to believe that they have as yet seen no proof of defendant’s guilt is an easy belief to acquiesce in, since it is obviously true. Continue reading

Categories: frequentist/Bayesian, PhilStatLaw, Statistics | 28 Comments

Phil/Stat/Law: What Bayesian prior should a jury have? (Schachtman)

Nathan Schachtman, Esq., PC* emailed me the following interesting query a while ago:

When I was working through some of the Bayesian-in-the-law issues with my class, I raised the problem of priors of 0 and 1 being “out of bounds” for a Bayesian analyst. I didn’t realize then that the problem had a name: Cromwell’s Rule.

My point was then, and more so now, what is the appropriate prior the jury should have when it is sworn?  When it hears opening statements?  Just before the first piece of evidence is received?

Do we tell the jury that the defendant is presumed innocent, which means that it’s ok to entertain a very, very small prior probability of guilt, say no more than 1/N, where N is the total population of people? This seems wrong as a matter of legal theory.  But if the prior = 0, then no amount of evidence can move the jury off its prior.
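Schachtman’s worry is easy to exhibit numerically. Here is a minimal sketch in Python of the odds form of Bayes’ rule; the likelihood ratios and population size are invented for illustration. A prior of exactly 0 is immovable, while a prior of 1/N, however tiny, can be driven up by strong evidence.

```python
def update(prior, likelihood_ratio):
    """One Bayesian update in odds form: posterior odds = prior odds * LR."""
    if prior == 0:
        return 0.0                       # Cromwell's Rule: zero stays zero
    prior_odds = prior / (1 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

N = 300_000_000                          # illustrative population size
evidence = [1000, 1000, 1000]            # three invented, strong likelihood ratios

for prior in (0.0, 1 / N):
    p = prior
    for lr in evidence:
        p = update(p, lr)
    print(f"prior = {prior:.2e}  ->  posterior = {p:.4f}")
```

This is exactly the content of Cromwell’s Rule: reserve probabilities of 0 and 1 for logical truths and falsehoods, or no amount of evidence can ever change your mind.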

*Schachtman’s legal practice focuses on the defense of product liability suits, with an emphasis on the scientific and medico-legal issues.  He teaches a course in statistics in the law at the Columbia Law School, NYC. He also has a legal blog here.

Categories: PhilStatLaw, Statistics | Tags: | 27 Comments

PhilStatLaw: Reference Manual on Scientific Evidence (3d ed) on Statistical Significance (Schachtman)

Memory Lane: One Year Ago on error statistics.com

A quick perusal of the “Manual” on Nathan Schachtman’s legal blog shows it to be chock-full of revealing points of contemporary legal statistical philosophy. The following are some excerpts; read the full post here. I make two comments at the end.

July 8th, 2012

Nathan Schachtman

How does the new Reference Manual on Scientific Evidence (RMSE3d 2011) treat statistical significance?  Inconsistently and at times incoherently.

Professor Berger’s Introduction

In her introductory chapter, the late Professor Margaret A. Berger raises the question of the role statistical significance should play in evaluating a study’s support for causal conclusions:

“What role should statistical significance play in assessing the value of a study? Epidemiological studies that are not conclusive but show some increased risk do not prove a lack of causation. Some courts find that they therefore have some probative value,62 at least in proving general causation.63”

Margaret A. Berger, “The Admissibility of Expert Testimony,” in RMSE3d 11, 24 (2011).

This seems rather backwards. Berger’s suggestion that inconclusive studies do not prove lack of causation seems nothing more than a tautology. And how can that tautology support the claim that inconclusive studies “therefore” have some probative value? This is a fairly obviously invalid argument, or perhaps a passage badly in need of an editor.

…………

Chapter on Statistics

The RMSE’s chapter on statistics is relatively free of value judgments about significance probability, and, therefore, a great improvement upon Berger’s introduction.  The authors carefully describe significance probability and p-values, and explain:

“Small p-values argue against the null hypothesis. Statistical significance is determined by reference to the p-value; significance testing (also called hypothesis testing) is the technique for computing p-values and determining statistical significance.”

David H. Kaye and David A. Freedman, “Reference Guide on Statistics,” in RMSE3d 211, 241 (3d ed. 2011). Although the chapter confuses and conflates Fisher’s interpretation of p-values with Neyman’s conceptualization of hypothesis testing as a dichotomous decision procedure, this treatment is unfortunately fairly standard in introductory textbooks.

Kaye and Freedman, however, do offer some important qualifications concerning the untoward consequences of using significance testing as a dichotomous decision procedure:

“Artifacts from multiple testing are commonplace. Because research that fails to uncover significance often is not published, reviews of the literature may produce an unduly large number of studies finding statistical significance.111 Even a single researcher may examine so many different relationships that a few will achieve statistical significance by mere happenstance. Almost any large data set—even pages from a table of random digits—will contain some unusual pattern that can be uncovered by diligent search. Having detected the pattern, the analyst can perform a statistical test for it, blandly ignoring the search effort. Statistical significance is bound to follow.

There are statistical methods for dealing with multiple looks at the data, which permit the calculation of meaningful p-values in certain cases.112 However, no general solution is available, and the existing methods would be of little help in the typical case where analysts have tested and rejected a variety of models before arriving at the one considered the most satisfactory (see infra Section V on regression models). In these situations, courts should not be overly impressed with claims that estimates are significant. Instead, they should be asking how analysts developed their models.113 ”

Id. at 256-57. This qualification is omitted from the overlapping discussion in the chapter on epidemiology, where it is very much needed. Continue reading
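Kaye and Freedman’s point about diligent search through noise is easy to verify numerically. The sketch below is my illustration, not an example from the Manual: it tests 100 unrelated null “relationships” at the 0.05 level, so roughly five reach nominal significance by construction, and the smallest p-value found by searching looks impressive.

```python
import math
import random

random.seed(3)

def two_sided_p(z):
    """Two-sided p-value for a standard-normal test statistic."""
    return math.erfc(abs(z) / math.sqrt(2))

# 100 unrelated "relationships" drawn from pure noise, each tested at 0.05.
n_tests, n = 100, 40
p_values = []
for _ in range(n_tests):
    x = [random.gauss(0, 1) for _ in range(n)]
    z = (sum(x) / n) * math.sqrt(n)      # test of H0: true mean = 0
    p_values.append(two_sided_p(z))

significant = [p for p in p_values if p < 0.05]
print(f"{len(significant)} of {n_tests} null 'relationships' reached p < 0.05")
print(f"Smallest p found by diligent search: {min(p_values):.4f}")
```

If only the significant handful gets reported, the resulting literature looks far stronger than the underlying data warrant, which is precisely the publication-bias worry in the passage above.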

Categories: P-values, PhilStatLaw, significance tests | Tags: , , , , | 6 Comments

Phil/Stat/Law: 50 Shades of gray between error and fraud

An update on the Diederik Stapel case: July 2, 2013, The Scientist, “Dutch Fraudster Scientist Avoids Jail”.

Two years after being exposed by colleagues for making up data in at least 30 published journal articles, former Tilburg University professor Diederik Stapel will avoid a trial for fraud. Once one of the Netherlands’ leading social psychologists, Stapel has agreed to a pre-trial settlement with Dutch prosecutors to perform 120 hours of community service.

According to Dutch newspaper NRC Handelsblad, the Dutch Organization for Scientific Research awarded Stapel $2.8 million in grants for research that was ultimately tarnished by misconduct. However, the Dutch Public Prosecution Service and the Fiscal Information and Investigation Service said on Friday (June 28) that because Stapel used the grant money for student and staff salaries to perform research, he had not misused public funds. …

In addition to the community service he will perform, Stapel has agreed not to make a claim on 18 months’ worth of illness and disability compensation that he was due under his terms of employment with Tilburg University. Stapel also voluntarily returned his doctorate from the University of Amsterdam and, according to Retraction Watch, retracted 53 of the more than 150 papers he has co-authored.

“I very much regret the mistakes I have made,” Stapel told ScienceInsider. “I am happy for my colleagues as well as for my family that with this settlement, a court case has been avoided.”

No surprise he’s not doing jail time, but 120 hours of community service? After over a decade of fraud, and tainting 14 of the 21 PhD theses he supervised? Perhaps the “community service” should be to actually run the experiments he had designed. And what of the finding that he didn’t misuse public funds? Continue reading

Categories: PhilStatLaw, spurious p values, Statistics | 13 Comments

PhilStock: The Great Taper Caper

See Rejected Posts.

Categories: PhilStock, Rejected Posts | Leave a comment

PhilStock: Topsy-Turvy Game

See rejected posts.

Categories: PhilStock, Rejected Posts | Leave a comment

Schachtman: High, Higher, Highest Quality Research Act

Since posting on the High Quality Research Act a few weeks ago, I’ve been following it in the news, have received letters from professional committees (asking us to write letters), and now see that Nathan A. Schachtman, Esq., PC posted the following on May 25, 2013 on his legal blog*:

“The High Quality Research Act” (HQRA), which has not been formally introduced in Congress, continues to draw attention. See “Clowns to the left of me, Jokers to the right.” Last week, Sarewitz suggested that “the problem” is the hype about the benefits of pure research and the letdown that results from the realization that scientific progress is “often halting and incremental,” with much research not “particularly innovative or valuable.” Fair enough, but why is this Congress such an unsophisticated consumer of scientific research in the 21st century? How can it be a surprise that the scientific community engages in the same rent-seeking behaviors as do other segments of our society? Has it escaped Congress’s attention that scientists are subject to enthusiasms and groupthink, just like, … congressmen?

Nature published an editorial piece suggesting that the HQRA is not much of a threat. Daniel Sarewitz, “Pure hype of pure research helps no one,” 497 Nature 411 (2013).

Still, Sarewitz believes that the HQRA bill is not particularly threatening to the funding of science:

“In other words, it’s not a very good bill, but neither is it much of a threat. In fact, it’s just the latest skirmish in a long-running battle for political control over publicly funded science — one fought since at least 1947, when President Truman vetoed the first bill to create the NSF because it didn’t include strong enough lines of political accountability.”

This sanguine evaluation misses the effect of the superlatives in the criteria for National Science Foundation funding:

“(1) is in the interests of the United States to advance the national health, prosperity, or welfare, and to secure the national defense by promoting the progress of science;

(2) is the finest quality, is ground breaking, and answers questions or solves problems that are of utmost importance to society at large; and

(3) is not duplicative of other research projects being funded by the Foundation or other Federal science agencies.” Continue reading

Categories: evidence-based policy, PhilStatLaw, Statistics | Tags: | 12 Comments

PhilStock: DO < $70

See rejected posts.

Categories: PhilStock, Rejected Posts | Leave a comment

New PhilStock

See Rejected Posts: Beyond luck or method.

Categories: PhilStock, Rejected Posts | 1 Comment

PhilStat/Law/Stock: more on “bad statistics”: Schachtman

Nathan Schachtman has an update on the case of U.S. v. Harkonen discussed in my last three posts: here, here, and here.

United States of America v. W. Scott Harkonen, MD — Part III

Background

The recent oral argument in United States v. Harkonen (see “The (Clinical) Trial by Franz Kafka” (Dec. 11, 2012)) pushed me to revisit the brief filed by the Solicitor General’s office in Matrixx Initiatives Inc. v. Siracusano, 131 S. Ct. 1309 (2011). One of Dr. Harkonen’s post-trial motions contended that the government’s failure to disclose its Matrixx amicus brief deprived him of a powerful argument that would have resulted from citing the language of the brief, which disparaged the necessity of statistical significance for “demonstrating” causal inferences. See “Multiplicity versus Duplicity – The Harkonen Conviction” (Dec. 11, 2012). Continue reading

Categories: PhilStatLaw, PhilStock, Statistics | Leave a comment

PhilStat/Law/Stock: multiplicity and duplicity

So what’s the allegation that the prosecutors are being duplicitous about statistical evidence in the case discussed in my two previous (‘Bad Statistics’) posts? As a non-lawyer, I will ponder only the evidential (and not the criminal) issues involved.

“After the conviction, Dr. Harkonen’s counsel moved for a new trial on grounds of newly discovered evidence. Dr. Harkonen’s counsel hoisted the prosecutors with their own petards, by quoting the government’s amicus brief to the United States Supreme Court in Matrixx Initiatives Inc. v. Siracusano, 131 S. Ct. 1309 (2011).  In Matrixx, the securities fraud plaintiffs contended that they need not plead ‘statistically significant’ evidence for adverse drug effects.” (Schachtman’s part 2, ‘The Duplicity Problem – The Matrixx Motion’) 

The Matrixx case is another philstat/law/stock example taken up in this blog here, here, and here.  Why are the Harkonen prosecutors “hoisted with their own petards” (a great expression, by the way)? Continue reading

Categories: PhilStatLaw, PhilStock, Statistics | Tags: | 4 Comments
