Guest Post: Larry Laudan. Why Presuming Innocence is Not a Bayesian Prior

“Why presuming innocence has nothing to do with assigning low prior probabilities to the proposition that defendant didn’t commit the crime”

by Professor Larry Laudan
Philosopher of Science*

Several of the comments to the July 17 post about the presumption of innocence suppose that jurors are asked to believe, at the outset of a trial, that the defendant did not commit the crime and that they can legitimately convict him if and only if they are eventually persuaded that it is highly likely (pursuant to the prevailing standard of proof) that he did in fact commit it. Failing that, they must find him not guilty. Many contributors here are conjecturing how confident jurors should be at the outset about defendant’s material innocence.

That is a natural enough Bayesian way of formulating the issue, but I think it drastically misstates what the presumption of innocence amounts to. In my view, the presumption is not (or at least should not be) an instruction about whether jurors believe defendant did or did not commit the crime. It is, rather, an instruction about their probative attitudes.

There are three reasons for thinking this:

a). asking a juror to begin a trial believing that defendant did not commit a crime requires a doxastic act that is probably outside the jurors’ control. It would involve asking jurors to strongly believe an empirical assertion for which they have no evidence whatsoever. It is wholly unclear that any of us has the ability to talk ourselves into resolutely believing x if we have no empirical grounds for asserting x. By contrast, asking jurors to believe that they have as yet seen no proof of defendant’s guilt is an easy belief to acquiesce in, since it is obviously true.

b). asking jurors to believe that defendant did not commit the crime seems a rather strange and gratuitous request to make, since at no point in the trial will jurors be asked to make a judgment whether defendant is materially innocent. The key decision they must make at the end of the trial does not require a determination of factual innocence. On the contrary, jurors must make a probative judgment: has it been proved beyond a reasonable doubt that defendant committed the crime? If they believe that the proof standard has been satisfied, they issue a verdict of guilty. If not, they acquit him. It is crucial to grasp that an acquittal entails nothing about whether defendant committed the crime. What it focuses on is how strong or weak the proof is that he did so. Because their verdict decision is entirely a question about whether guilt has been proven or not, the guilt-not-proven verdict leaves wholly unresolved the issue whether the defendant did or did not commit the crime. Boastful claims to the press from defense attorneys about how their newly acquitted clients have been ‘exonerated’ or ‘vindicated’ are patently misleading. What they should be proclaiming on the courthouse steps is something like: “There’s at least a 5-10% chance that my client didn’t commit the crime.” (Except in Scotland, ‘innocence’ simply does not figure among the verdict options open to Anglo-Saxon jurors.)

c). Legal jurisprudence itself makes clear that the presumption of innocence must be glossed in probatory terms.   Consider this model federal jury instruction:

“The law presumes defendant to be innocent of all the charges against him. I therefore instruct you that the defendant is to be presumed by you to be innocent throughout your deliberations until such time, if ever, you as a jury are satisfied that the government has proven him guilty beyond a reasonable doubt.” US v. Walker (1988)

Bayesians will of course be understandably appalled at the suggestion here that, as the jury comes to see and consider more and more evidence, they must continue assuming that defendant did not commit the crime until they make a quantum leap and suddenly decide that his guilt has been proven to a very high standard. This instruction makes sense if and only if we suppose that the court is not referring to belief in the likelihood of material innocence (which will presumably gradually decline with the accumulation of more and more inculpatory evidence) but rather to a belief that guilt has been proved.
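To see the contrast concretely, here is a minimal sketch (with invented prior odds, likelihood ratios, and proof standard) of the Bayesian picture: the posterior probability of material guilt creeps upward with each piece of inculpatory evidence, and “proof” is simply the moment it crosses a stipulated standard.

```python
# A minimal illustrative sketch; every number here is invented.
def running_posteriors(prior_odds, likelihood_ratios):
    """Yield the posterior probability after each item of evidence (odds form of Bayes)."""
    odds = prior_odds
    for lr in likelihood_ratios:
        odds *= lr
        yield odds / (1 + odds)

prior_odds = 0.01 / 0.99          # hypothetical prior: 1% chance of guilt
evidence_lrs = [5, 8, 3, 10, 4]   # hypothetical likelihood ratio of each exhibit
PROOF_STANDARD = 0.95             # one common (and contested) gloss of BARD

for i, p in enumerate(running_posteriors(prior_odds, evidence_lrs), 1):
    status = "guilt proven" if p > PROOF_STANDARD else "not yet proven"
    print(f"after exhibit {i}: P(guilt) = {p:.3f} -> {status}")
```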

As I see it, the presumption of innocence is nothing more than an instruction to jurors to avoid factoring into their calculations the fact that the defendant is on trial because some people in the legal system believe him to be guilty. Such an instruction may be reasonable or not (after all, roughly 80% of those who go to trial are convicted and, given what we know about false conviction rates, that clearly means that the majority of defendants are guilty). But I’m quite prepared to have jurors urged to ignore what they know about conviction rates at trial and simply go into a trial acknowledging that, to date, they have seen no proof of defendant’s culpability.

Larry Laudan

*Currently a Professor of Philosophy & Law, The University of Texas School of Law

Among Laudan’s books:

1977. Progress and its Problems: Towards a Theory of Scientific Growth
1981. Science and Hypothesis
1984. Science and Values
1990. Science and Relativism: Dialogues on the Philosophy of Science
1996. Beyond Positivism and Relativism
2006. Truth, Error and Criminal Law: An Essay in Legal Epistemology


28 thoughts on “Guest Post: Larry Laudan. Why Presuming Innocence is Not a Bayesian Prior”

  1. Larry: Thanks so much for this post! It greatly illuminates what I tried to say in my earlier comments on Schachtman, but more than that, it may suggest a better way to state a general point that I am constantly trying to convey, with mixed success; namely that assessing probativeness differs from assessing probabilities. The former is a matter of how good a job was done. The ‘rating’, if there is to be such a thing, has entirely to do with what was shown, how effective the performance was at ruling out the denial of a claim H (in this case, innocence). [One might separately ask about the warrant for H, above and beyond what was or wasn’t shown by a given body of evidence.]

    However, it isn’t so clear to me how to relate concepts from statistical tests to your remark that “There’s at least a 5-10% chance that my client didn’t commit the crime.” Perhaps something like this: the evidence presented, E1, E2,…En is not so inconsistent with innocence as to be practically impossible, under the assumption of innocence. Or: the set of coincidences that would have to be the case in order to have amassed the evidence they’ve presented, when in fact the person is innocent, is not so very tiny (not less than .05 say). A different kind of claim, which I don’t think you’d mean, is: at least 5% of the time, when this is the best evidence of guilt they can muster, the person is innocent. Or perhaps, “if we continually deemed persons innocent on the basis of this evidence, or evidence no better than this, we’d be wrong about innocence less than 5% of the time.”

    Do any of these fit?
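    For concreteness, here is a rough numerical rendering of the first gloss, under an invented model in which the strength of the assembled evidence E1,…,En is summarized by a single score assumed to be standard normal for an innocent person (all numbers hypothetical):

```python
# Rough sketch of the first gloss above; the model and numbers are invented.
from math import erfc, sqrt

observed_score = 1.5    # hypothetical summary of how incriminating the evidence looks
# Tail probability under innocence, assuming the score is N(0,1) for an innocent person:
p = 0.5 * erfc(observed_score / sqrt(2))    # P(score >= observed | innocent)
print(f"P(evidence at least this incriminating | innocence) = {p:.3f}")  # ~0.067
# Not below .05, so on this gloss the evidence has not made innocence "practically impossible."
```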

  2. Larry Laudan

    Deborah,
    The standard thinking among legal scholars, which generated the words I put in the mouth of my archetypal lawyer, is this: a). exoneration studies (of which there is now an abundance) reveal that there is a false conviction rate at trial of ~5%; b). that, in turn, is routinely construed as indicating that the de facto standard of proof is about 95% likelihood of guilt; c). leading to the supposition that a defendant will/should be acquitted whenever his apparent guilt is <95%. I don’t endorse this reasoning but I believe it represents how a typical lawyer would approach the issue.
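    The step from a). to b). rests on a calibration assumption worth making explicit; a small simulation (all numbers invented) shows it: if juries are well calibrated and convict exactly when their probability of guilt reaches a threshold t, the false conviction rate among the convicted is at most 1 − t, with equality only when convictions cluster right at the threshold.

```python
# Illustrative simulation of the threshold-to-error-rate link; numbers are invented.
import random

random.seed(1)
t = 0.95                                   # hypothetical de facto proof standard
# Hypothetical distribution of jurors' final P(guilt) across many trials:
verdict_probs = [random.uniform(0.5, 1.0) for _ in range(100_000)]
convicted = [p for p in verdict_probs if p >= t]
false_rate = sum(1 - p for p in convicted) / len(convicted)  # expected share of innocents convicted
print(f"false conviction rate among the convicted ~ {false_rate:.3f} (bound: {1 - t:.2f})")
```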

  3. Larry: First of all, sorry your comment didn’t show up till now: you hadn’t commented before, but now your comments will show up immediately.

    I’m wondering how the false conviction rate would actually be known, in general–aside from those cases garnering sufficient interest to reexamine (I will look into exoneration studies). But, in any event, that assessment would differ, would it not, from the criteria actually applied by a juror in summing up the evidence (to judge if guilt has been “proven” to the required standard). Nothing you’ve said is at odds with that in the least; in that sentence, you were describing what a defense lawyer could reasonably say, as opposed to the juror assessment of not guilty.

  4. Reblogged this on Not Knowing Things and commented:
    A philosophy of science approach to the Bayesian inference required by the justice system.

  5. Reblogged this on Епанечников блог and commented:
    “…the presumption (of innocence) is not (or at least should not be) an instruction about whether jurors believe defendant did or did not commit the crime. It is, rather, an instruction about their probative attitudes.”

  6. bayesrules

    I don’t see any incompatibility between what my Texas colleague Prof. Laudan writes and what I wrote. The point of the Bayesian prior of probable innocence, and of the evidence that changes that prior into a posterior, is precisely to determine what the probative attitude should be when all the evidence is taken into account. The evidence that caused the authorities to arrest and then indict the person on trial is precisely what would be used by a jury that started from a prior that assumed probable innocence to reach its final posterior. A 1/N prior, where N is the population, is appropriate when you know nothing of the evidence (e.g., the police find a dead person whose identity has not been made known at that point). But this will quickly be changed towards guilt by nothing more than the fact that crimes require motive, and motive usually resides with a much smaller population, e.g., people known to the victim, people who have already been in trouble with the law, and so forth. A 1/N prior will not survive for long in the face of such considerations.
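    A toy calculation (with invented numbers) of how quickly a 1/N prior gives way to such considerations:

```python
# Toy sketch: a 1/N prior collapses under a single motive-style consideration.
N = 1_000_000                 # hypothetical population who could in principle have done it
prior = 1 / N                 # "know nothing" prior for any given person

# Suppose (illustratively) it is judged 95% likely the culprit knew the victim,
# with guilt then spread evenly over 20 acquaintances:
p_culprit_knew_victim = 0.95
acquaintances = 20
posterior_for_one_acquaintance = p_culprit_knew_victim / acquaintances

print(f"1/N prior: {prior:.1e}")                                                 # 1.0e-06
print(f"after the motive consideration: {posterior_for_one_acquaintance:.4f}")   # 0.0475
```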

    The problem I see is that the jury (trained as we all are in Bayesian thinking 🙂) has to avoid using the same evidence twice. But a jury that started with a belief that the accused is more likely guilty than innocent simply because of an indictment, and then is presented with the evidence that brought the probability of guilt to greater than 50% in the eyes of the authorities (as will certainly happen), cannot use that evidence again to further raise the probability of guilt, for then they would be violating the rule that P(H|E,E)=P(H|E).

    Maybe there are ways around this problem, but the problem is there unless specific steps are taken in the process to prevent this rule from being violated.
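    A toy illustration (invented numbers) of the violation: updating on the same likelihood ratio twice inflates a posterior that coherence says should stay put.

```python
# Sketch of the double-counting problem: P(H|E,E) must equal P(H|E).
def bayes_update(p, lr):
    """One odds-form Bayes update of probability p by likelihood ratio lr."""
    odds = p / (1 - p) * lr
    return odds / (1 + odds)

p0 = 0.01       # hypothetical pre-indictment prior
lr_E = 60       # hypothetical likelihood ratio of the indictment evidence E

once = bayes_update(p0, lr_E)        # P(H|E)
twice = bayes_update(once, lr_E)     # illegitimately reusing the same E
print(f"P(H|E)        = {once:.3f}")    # ~0.377
print(f"P(H|E,E) (!)  = {twice:.3f}")   # ~0.973, though coherence demands ~0.377
```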

    BTW, some of the comments on the previous thread were off-point. Prof. Schachtman specifically asked about the Bayesian situation, and comments to the effect that some frequentist approach is better, for example, do not answer his question.

    • Bayesrules: Schachtman was asking generally as to whether any Bayesian prior can capture the jury context, and I think the upshot of the answers he received points to no (but I’ll let him speak for himself). I think Laudan’s view is the right one: the concern is how well probed, not how probable*, guilt is shown. The former unlike the latter is also fairly easy to cash out. But on the business of using the same evidence twice: many Bayesians advocate or at least allow the data to be used in forming or modifying the prior (to obtain coherence in whatever way the agent prefers). Is that not using the data twice? In elicitation, for example, it is typical to use the data to get the prior which will figure into the Bayesian computation.

      *Although retrospectively, researchers might estimate the rates of false conviction, that is different from the criteria actually applied by the jury.

  7. Larry Laudan

    Deborah,
    You wondered how the error rate at trial could be ‘known’. It can’t, but it can be estimated. Attempting to estimate, even approximately, the error rates in criminal trials in any given country is precarious. That is partly because, until the 1950s, neither lawyers nor criminologists were the slightest bit interested in collecting the pertinent data.
    While most of the exonerations data are too precarious to put much store by, both the left and the right have attempted to use them to further their own agendas. Justice Scalia has happily used the data about exonerations for those convicted of a capital crime to infer that false convictions are so rare as to be negligible. (“That would make the error rate [in felony convictions] .027 percent—or, to put it another way, a success rate of 99.973 percent.”) Innocence Project devotees, by contrast, have taken the existence of virtually any provably false convictions as a reason for inferring that they are only the visible tip of an enormous iceberg and that erroneous convictions have to be seen as a frightening, everyday occurrence.
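    The arithmetic behind a figure like Scalia’s is simple; the inputs below (an assumed 4,000 wrongful convictions, i.e. documented exonerations inflated “to be safe,” against roughly 15 million felony convictions over the same period) follow the commonly cited reconstruction and are illustrative only.

```python
# Illustrative reconstruction of the .027% figure; inputs are assumptions, not data.
wrongful_convictions = 4_000          # assumed: documented exonerations, generously inflated
felony_convictions = 15_000_000       # assumed: total felony convictions in the period

error_rate = wrongful_convictions / felony_convictions
print(f"error rate   ~ {error_rate:.3%}")      # ~0.027%
print(f"success rate ~ {1 - error_rate:.3%}")  # ~99.973%
```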

    The best treatment of this issue can be found in a splendid article by Michael Risinger, appropriately titled: “Innocents Convicted: An Empirically Justified Factual Wrongful Conviction Rate.” (2007; downloadable from http://ssrn.com/abstract=931454).

    The take-away, at this point in time, is that of the perhaps fifteen studies of the frequency of false convictions in the literature, none produced an estimate higher than 5% (and most were vastly lower than that). (A summary of much of this research can be found in a paper of mine published in Andrei Marmor, ed., The Routledge Companion to Philosophy of Law (2012), which can be downloaded from http://ssrn.com/abstract=1815321)

    It is more than mildly ironic that US courts, ever since Daubert, have had the power to exclude expert testimony when the expert in question could not specify the error rates of his methodology, while the courts themselves are blissfully ignorant of their own error rates.

    • Larry: Thanks so much for the links. I will read your article, and try to convince you to give me a distinct post some time on that topic alone.

      On the Daubert point, I hadn’t realized they required a formal error rate of experts. But of course they can say the determination of cause under Daubert differs from legally determining guilt! Or something like that. Maybe Schachtman will weigh in on this, as he’s a Daubert expert.

  8. Christian Hennig

    I may be missing something, but my impression is that the issue raised by Schachtman is much more serious than the issue raised here, at least if the “presumption of innocence” is interpreted in the way that, using a Bayesian approach, it just requires the cutoff value for the posterior probability of guilt to be very high, 95 or 99%, say, to suffice for conviction. I would then follow Larry in arguing that the prior should be left alone by this presumption, because it is not about belief but about how strong the evidence must be for “beyond reasonable doubt”, and that’s it. The Bayesian can be happy with this. (Ignoring for a moment that coming up with a suitable prior still is a mess.)

    • bayesrules

      I believe a better way to do this is to put the problem in the context of decision theory. The loss function accounts for the “beyond a reasonable doubt” part of the discussion by explicitly stating how much greater the loss is if an innocent person is unjustly convicted than if a guilty person is acquitted. That is where the 95% or 99% calculation properly goes. For example, if the loss for the action and state-of-nature pair (convict, innocent) is taken to be 99, the loss for (acquit, guilty) is taken as 1, and the other two pairs (convict, guilty) and (acquit, innocent) are taken as 0 (that is, a correct decision entails no loss, and the loss for an incorrect decision in favor of a guilty person is taken as 99 times smaller than the loss for an incorrect decision against an innocent person), then the accused will be convicted if the posterior probability of guilt is greater than 0.99.

      (In the above, I mean by “guilty” and “innocent” the actual states of nature, and by “acquit” and “convict” the actions of the jury.)
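      A minimal sketch of that calculation, using exactly the 99/1/0 losses above:

```python
# Expected-loss comparison for the two-action, two-state problem described above.
LOSS = {("convict", "innocent"): 99, ("acquit", "guilty"): 1,
        ("convict", "guilty"): 0,    ("acquit", "innocent"): 0}

def best_action(p_guilt):
    """Return the expected-loss-minimizing action at a given posterior probability of guilt."""
    expected = {a: p_guilt * LOSS[(a, "guilty")] + (1 - p_guilt) * LOSS[(a, "innocent")]
                for a in ("convict", "acquit")}
    return min(expected, key=expected.get), expected

for p in (0.95, 0.98, 0.995):
    action, expected = best_action(p)
    print(f"P(guilt) = {p}: {action}  {expected}")
# Indifference where 99*(1-p) = 1*p, i.e. p = 0.99: convicting minimizes loss only above that.
```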

      The prior determines, before any evidence is considered by the jury, what the probability of guilt is in the absence of that evidence, so that the posterior probability of guilt is, in the usual Bayesian way, proportional to prior × likelihood.

      That cleanly separates the two issues, and allows each to be considered in its own right. Mixing the two up by trying to combine both in the prior is bound to lead to incoherence.

      • Christian Hennig

        I don’t know much about law, but I’d have thought that the law doesn’t allow “guilty beyond reasonable doubt” to depend on the loss assigned to (acquit, guilty), which may strongly depend on what crime we are talking about and may in some cases lead to conviction of somebody with a not-so-high probability of guilt!? (If you propose to choose the losses as constants independent of the crime, it would boil down to having a fixed posterior probability cutoff, no?)
        I’d interpret “beyond reasonable doubt” to refer to a probability, not a loss.

        • bayesrules

          In my classes, the students generally consider the seriousness of the crime and the penalty when deciding on a loss function. In the case of a misdemeanor, the cost of making a mistake when the action is convict and the state of nature is innocent is not nearly as high as in the case of a capital crime, because the penalty exacted is not as severe. Here in Vermont, which is not a capital punishment state, but where a recent case was brought by the Bush administration as a capital crime in federal court (a kidnapping), my students generally choose an infinite loss in the case where the action is (convict and exact the death penalty, innocent), so that the death penalty would never be applied according to this calculation. But when I was teaching at the University of Texas, my students sometimes chose loss functions that would result in the death penalty if the posterior probability of guilt was sufficiently high. Generally speaking, if the outcome (convict and exact life in prison, innocent) is provided as a third possibility, then the structure of the resulting loss function as determined by my classes has been such that the death penalty would never be applied.

          The key here is that the action part of the loss function implicitly includes the severity of the penalty as part of the action, and therefore different crimes would naturally result in different losses and hence different posterior probability of guilt would be needed for conviction in different cases.

          But the point is that by cleanly separating the issues of probability of guilt from the actions of penalty/acquittal, a decision-theoretic approach allows a principled approach to this problem.
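          A sketch of the three-action version (the finite losses are invented; the infinite loss encodes the students’ choice for executing an innocent person): once (convict and exact the death penalty, innocent) carries infinite loss, no posterior probability of guilt short of certainty makes the death penalty the expected-loss minimizer.

```python
# Three actions, two states; math.inf encodes the "never execute an innocent" judgment.
import math

LOSS = {("execute", "innocent"): math.inf, ("execute", "guilty"): 0,
        ("life",    "innocent"): 99,       ("life",    "guilty"): 0,
        ("acquit",  "innocent"): 0,        ("acquit",  "guilty"): 1}

def best_action(p_guilt):
    expected = {a: p_guilt * LOSS[(a, "guilty")] + (1 - p_guilt) * LOSS[(a, "innocent")]
                for a in ("execute", "life", "acquit")}
    return min(expected, key=expected.get)

for p in (0.9, 0.999, 0.9999):
    print(f"P(guilt) = {p}: {best_action(p)}")   # "execute" never wins for P(guilt) < 1
```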

        • bayesrules

          Let me also note that the law itself doesn’t say what is meant by “guilty beyond a reasonable doubt.” That is also up to the jury in any individual case, and clearly then what is meant by that will not be constant from trial to trial.

            • Bayesrules: I think (or hope) that what you say is incorrect. The severity of the penalty should vary with the crime, and the assessment of BARD should remain constant (at least the minimal hurdle). An individual defendant should have a right to have her case evaluated on its own merits—on how good a job was done in showing guilt BARD (or whatever the standard). The burden of proof for the crime, of course, reflects judgments that may well rest on a vague societal determination of average losses, but once the burden is set, it seems to me, it’s a matter of whether the burden was met in the case at hand. Or, at any rate, I think it ought to be….

            • bayesrules

              Well, I am pretty sure that there’s no law that says exactly or even approximately what “beyond a reasonable doubt” means (Prof. Laudan or Prof. Schachtman can correct me if I am wrong, I’m not a lawyer either). I don’t think that the jury instructions that the judge writes are going to say “95%” or “99%”, for example. About all we know is that it is “substantially more” than a preponderance of the evidence standard (e.g., >50%).

              As for the minimal level of evidence required, I think that the risk of executing an innocent person in a death penalty case must be set very, very low in fact. A 1% chance of executing an innocent person is unacceptably high for me, whereas a 1% chance of sending an innocent person to prison for life is perhaps acceptable; that is a mistake that has a chance of being remedied. For reasons like this, although I agree that the severity of the penalty should vary with the crime, I also think that it’s unacceptable to set a constant level of burden of proof, regardless of the seriousness of the penalty, since the injustice for convicting an innocent person increases as the penalty exacted increases.

              The Innocence Project has in fact found numerous examples of wrongful convictions, in excess of what I would regard as just. Our system of law cannot regard as acceptable such egregious miscarriages of justice as it has documented.

            • bayesrules

              Let me add that as a matter of long-standing precedent, since the 17th century Bushel’s Case (http://en.wikipedia.org/wiki/Bushel's_Case), juries have had the common-law right to judge both the law and the evidence, although judges and lawyers have tried to suppress juries’ knowledge of this right, and judges try to nullify it by exacting an oath from prospective jurors that they will adhere to the law as the judge states it. This right is, however, enshrined explicitly in the case of at least two state constitutions (Texas, and I believe also Indiana). William Penn even appealed to the Magna Carta in this case (he was one of the accused).

              If juries have the right to judge the law, then I think it follows pretty clearly that they have the right to judge what the level of proof should be in any given case, and since a petit jury only sits on one criminal case, that the level of proof they apply will vary from case to case. Which means that there’s not going to be a constant level of proof like “95%” or “99%”, and that juries, it seems to me as a consequence of this, have the right to consider the severity of the penalty as well as the injustice of applying that penalty when someone may actually be innocent.

              • Corey

                Prof. Jefferys: I was originally sympathetic to Mayo’s point of view that the BARD standard should be invariant to the potential punishment — after all, on its face it makes no reference to consequences. But you make a strong point regarding the de facto, thoroughly embedded-in-the-system power of juries to do whatever the hell they want. It makes me reflect that perhaps the word “reasonable” in BARD encompasses the consequences of erroneous findings.

              • If what Bayesrules just said is true, then indeed anything goes! I’ve never heard of this though. Imagine the shock of jury selectors (for the prosecution) discovering that they seated someone who rejects the law entirely. (Maybe that’s their first question.)

                • Corey

                  Mayo: There’s a general prohibition against punishing juries for their verdicts, which means that judges’ instructions about verdicts can’t actually be enforced. So even though most juries never find out, they have the power to find any defendant, even one who is clearly guilty, not guilty. But lawyers swear an oath to uphold the law, and this is generally held to prevent them from informing juries that they have this power. I am not a lawyer, or even an American, so I wonder what Nathan Schachtman thinks of all this.

  9. Nathan Schachtman

    I have been traveling and am just rejoining the conversation. I am a bit like Donny in The Big Lebowski; I have come into the movie late, and I want everyone to stop and explain what’s going on. Mayo is correct that I started off by identifying why I thought a Bayesian analysis was problematic, and I suppose I ended our discussion still believing that such analysis was problematic. In her comments to Prof. Laudan, Mayo drew a distinction between probativeness and probability, a distinction that seems at home in legal contexts, and which I believe would avoid some problems in Bayesian modeling of the litigation process.

    In any event, I don’t disagree with Prof. Larry Laudan’s characterization of the issue that I had posed to Deborah Mayo in an email, and which then became the focus of discussion in the earlier post. I had framed the problem in Bayesian terms because I have seen various commentators do so, and because I thought the formulation was problematic, for the reasons I stated.

    Prof. Laudan offered 3 reasons for claiming that the presumption of innocence is about probative attitudes, and not about adopting a belief in actual innocence. The first is that the presumption here is nothing more than acknowledging that the jurors have seen no evidence of guilt. In some cases, it is more, and only jurors who are prepared to state, under oath, that they can put whatever evidence they have seen aside, and indulge the presumption of innocence, will win a seat on the jury.

    The 2d reason given is that the presumption of innocence is not about actual belief that the defendant is innocent because the jurors will never be asked to declare whether the defendant is innocent, only whether they believe him guilty. The alternative not guilty verdict means at a minimum that there was reasonable doubt that kept the jurors collectively from reaching a judgment of guilty beyond a reasonable doubt. Laudan presents the interpretation of a not guilty verdict as “There’s at least a 5-10% chance that my client didn’t commit the crime.” [And this of course makes the assumption that PBARD involves a 90% level of certainty in the guilt propositions.] But this interpretation is not generally true. There are some cases in which the defendant has the burden of proof on a key issue, say insanity, and the jury’s verdict means that the defendant was not guilty of murder (which requires a certain mens rea), more likely than not.

    Even without an affirmative defense, I am not persuaded that a not guilty verdict entails nothing about whether the defendant committed the crime (that is, committed the act with the requisite mental state). The not guilty verdict entails at least that the jury had a reasonable doubt about whether the defendant committed the crime. That is not nothing, and in a given case, it might be much more. You would have to ask the jury, which of course, in some states, you can do. If the prosecution turned on one key witness, who was impeached up and down the Hudson, shown to be a liar and a fraudfeasor, and then contradicted by the defendant, who took the stand, then after the jury acquits (say in 5 minutes), the defense lawyer would be perfectly correct to tell the media after the trial that the jury believed the defendant to be innocent. [This would seem especially true if either the witness or the defendant were telling the truth; there was no other plausible explanation before the jury; and the jury voted “not guilty.”]

    As for the seriousness of the crime playing a role in the loss function analysis, I have doubts here as well. The jury is often in the dark about what the consequences of conviction will be. In some states, in capital murder cases, the same jurors will sit to decide on whether the death penalty will be given. Otherwise, the jury doesn’t know what the sentencing consequences will be. Many jurors would be disappointed or upset that the consequences were too lenient or too stringent. Some trials involve charges of multiple crimes, of varying degrees of criminal offense. (As in Florida v. Zimmerman, murder 2, and manslaughter). There will be one judicial instruction of what beyond a reasonable doubt means, and the judge will tell the jury to apply it to all charges that are submitted to them.

    A quick point on the error rate factor as it pertains to Federal Rule of Evidence 702 (Daubert v. Merrell Dow Pharms.). Very few cases have actually relied upon an unknown error rate to exclude an expert witness’s opinion testimony. [No cases have looked at an expert’s track record to assess whether his opinion in this case is inadmissible based upon his having endorsed various quack opinions in the past. Maybe they should, but we usually do not have the definitive answer to a controversy. I could give some examples where I have collected evidence of an expert witness’s high error rate with respect to his causal or diagnostic conclusions.] In civil cases, some defense counsel have attempted to claim that p-values > 5% mean an error rate too high for the opinion to be acceptable, but this involves a confusion between random error in a given study and the error rate in the expert witness’s opinion. If we were to accept John Ioannidis’ assessment of observational epidemiologic studies, and consider the role that such studies have in many toxic tort cases, we would have to accept that the error rate is likely very high in such civil cases. I know that Prof. Greenland has dissented from Ioannidis’ assessment, but that only illustrates that there is no real agreement on this issue. In any event, I believe that the “testable” and “tested” factors of the Daubert decision have gotten much wider play, and the important questions for the judicial gatekeeper are whether the testing was designed, conducted, and interpreted in scientifically appropriate ways.

    Nathan

    • Nathan: Thanks for your detailed comments. Just to record a general concern, of which I’m sure you’re aware, there are a lot of distinct things being referred to in these comments regarding terms like error rates. (1) In the case of the jury, (a) how to capture the stringency of the argument or standard for a ‘not guilty’ determination is quite different from (b) an estimate of erroneous verdicts in some population of jury decisions.

      (2) In the case of the expert, there’s a big difference between (a) the formal error probabilities associated with the methodology in the expert’s particular report (for instance you mention p-values) and (b) various (alleged) estimated rates of false reports circulating e.g., in journals, based either on observational or experimental studies. I take it that Laudan was referring only to the former, (a) (as regards Daubert requirements). Note that the estimates in (b) do not themselves appear to result from methods/studies with anything like clear error rates (Greenland regards them as based on cherry-picking).

  10. Nathan Schachtman

    Deborah,

    Yes; there are different error rates, and Justice Blackmun, in writing the majority opinion in Daubert, did not really spell out what he meant. I adverted to the Ioannidis-Greenland debate because it involves not p-values but errors in both the magnitude and the direction of associations.

    As for jury nullification and juries’ use of variable standards in applying proof beyond a reasonable doubt, it is true that many important decisions are lost in the “fog of the jury room.” Juries once had the acknowledged power to interpret and nullify the law. When Chief Justice Marshall rode circuit to try cases, he charged juries that they should follow the law as well as their consciences. Marshall’s approach has not been followed for a long time in the United States, and I can confidently say that juries here no longer have a right to disregard the judge’s instructions on key legal issues. (Of course some legal instructions require the jury to fill in their judgment, as when a judge instructs that a defendant’s action may be excused by reason of self defense if he was in a reasonable fear of serious bodily harm. The jury must fill in, from their own understandings and experiences, what reasonable means in this context.)

    Nathan

  11. Nathan Schachtman

    Corey,

    Your understanding is essentially correct, with some qualification. Courts cannot punish juries for finding a guilty defendant not guilty or vice versa. If the courts think no reasonable jury should convict, then they will direct a verdict of acquittal (or earlier in the proceedings, refuse to bind the defendant over for trial). The jury’s power to acquit someone obviously guilty is not controlled directly, but certainly a rational (but criminal) defendant in that position would generally be reluctant to take the chance of pulling a jury willing to disregard the judge’s instructions. Still, it happens.

    As for punishing the jurors, there are now examples of jurors being held in contempt and drawing jail time and fines for disregarding instructions about using social media, or about doing independent research on the internet about the facts or the law of a case. Typically, another juror drops the dime on the infringing juror. So there is some (although admittedly incomplete) control over the jury process. Some cases are tried to the bench, both criminal and civil, and those cases require the judge to articulate correctly the rule of decision, and its application to the facts of the case. Thus a judge’s fact-finding and application of law to fact are subject to much greater control, both in terms of appellate review and critical academic and lay commentary.

    Nathan

    • Corey

      Thanks, Nathan. My “whatever the hell they want” comment should have had the qualifier “with respect to the verdict”; I was aware that there are lots of acts that can get a juror sanctioned. I was also aware that not only can a judge direct a verdict, he or she can also overrule a guilty verdict by issuing a judgment notwithstanding the verdict. But if a jury issues a verdict of not guilty, that seems to be pretty final. So it seems that in the last analysis, the standard of proof BARD is whatever each individual jury thinks it is.

      • Corey: So it seems in the last analysis that you’ve entirely missed the point of the post, that presuming innocence is not a Bayesian prior!

  12. Corey

    Mayo: Say rather that I don’t disagree with the point of the post, so I’ve been discussing tangential issues. (It seems to me that the presumption of innocence was designed to counteract confirmation bias, thereby acting as a check on the power of the state.)

    • Corey: OK, you don’t disagree with the post but have been discussing tangential issues.
