Convictions

The Origins of Reasonable Doubt: Theological Roots of the Criminal Trial
By James Q. Whitman
(Yale University Press, 276 pp., $40)

I.

To be convicted of a crime in our courts, a defendant must be proved guilty beyond a reasonable doubt. This rule is both fundamental to the criminal justice system of the United States and uncontroversial. To punish an innocent person is more costly than to acquit a guilty one, since convicting an innocent person imposes heavy costs of punishment on him and on the criminal justice system (the cost of administering the sentence), while the main consequence of acquitting a guilty person is merely to reduce, probably slightly (unless such acquittals become very common), the deterrent and incapacitative effect of the criminal law. In the criminal justice arena, in other words, a false positive--convicting the innocent--is usually more costly than a false negative--acquitting the guilty. By putting a heavy burden of proof on the prosecutor, the law shifts the ratio of false positives to false negatives in favor of the latter; it protects the innocent by making it difficult to convict a person on weak evidence.
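
The asymmetry described here can be made precise with a bit of expected-cost arithmetic. The following sketch is my own formalization, not anything found in the review or in Whitman's book: suppose the trier of fact assigns probability p to the defendant's guilt, and let C_fp and C_fn denote the costs of a false conviction and a false acquittal.

```latex
% A minimal decision-theoretic sketch (illustrative; the symbols p, C_fp,
% and C_fn are my own notation, not the review's). Convicting carries
% expected cost (1-p) C_fp; acquitting carries expected cost p C_fn.
% Conviction minimizes expected cost only when
\[
  (1 - p)\, C_{fp} \;<\; p\, C_{fn}
  \quad\Longleftrightarrow\quad
  p \;>\; \frac{C_{fp}}{C_{fp} + C_{fn}} .
\]
% If a false conviction is, say, nine times as costly as a false
% acquittal, the threshold is 0.9: a heavy burden of proof, but one
% that stops short of demanding certainty.
```

On this reading, "beyond a reasonable doubt" operates as a high but finite threshold on the trier's confidence, which is just the shift in the ratio of false positives to false negatives that the paragraph describes.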

But that is a static analysis. In dynamic terms, the requirement of proving guilt beyond a reasonable doubt need not increase the percentage of guilty defendants who are acquitted (the false negatives). For by forcing the government to invest more resources in the trial of a criminal case in order to prove guilt beyond a reasonable doubt, the requirement increases the accuracy of criminal trials: fewer innocent are convicted (provided the government adequately finances the defense of indigent defendants), and fewer guilty are acquitted. The reason is simply the greater investment in the process of litigation.

James Whitman, a legal historian at Yale Law School, claims in his new book that the reasonable-doubt rule is unrelated, in its historical origins and in its present-day usage, to any concern with accuracy in the determination of guilt, or even to the protection of the innocent. The concept of reasonable doubt, he argues, is designed solely to make judges and jurors comfortable with convicting people. It is a vestige of a vanished era in which "moral comfort" was important to people involved in the criminal justice system. Whitman wants the United States to jettison the rule, along with the rest of the Anglo-American (common law) apparatus of criminal justice, and adopt the criminal justice system of continental Europe instead.

He defends this bold and radical thesis by exploring the origins of the concept of proof beyond a reasonable doubt. Displaying great learning, Whitman narrates the following tale. Until the nineteenth century, though with diminishing frequency, European nations, including Britain, relied heavily on "blood punishments"--punishments, mainly execution and mutilation, that involved the shedding of the punished person's blood. (Sometimes it was metaphorical blood; hanging was treated as a blood punishment.) Yet in the early centuries of the Christian era, shedding the blood even of a guilty criminal, or of an enemy in a just war, was considered morally problematic, on theological grounds. If the person whose blood was shed was innocent, then all who were complicit in his punishment--accuser, judge, and witnesses--were guilty of mortal sin, and therefore likely to be damned. In contrast, a refusal to accuse or to bear witness against, or to vote to convict, a guilty criminal did not court damnation, because no one's blood was shed. Hence the "safer way" for a person potentially involved in a criminal proceeding on the side of the prosecution was to avoid doing anything that would result in a conviction, since there was always some possibility that the convicted defendant was innocent.

But a system of criminal justice in which no one was convicted would have been intolerable. So theologians, jurists, and other intellectuals of late antiquity and the Middle Ages cast about for means of assuring judges, accusers, and witnesses that they would not risk being damned merely by having contributed to the imposition of a blood punishment. One device these thinkers hit upon was "agency denial": the idea that the judge (and the other participants in the criminal justice system) who strictly adhered to the procedures prescribed for a criminal trial would not be an agent (that is, a doer) of the defendant's punishment. In Augustine's words, "When a man is killed justly, it is the law that kills him, not [the judge]."

Another device for giving judges and witnesses moral comfort plays a central role in Whitman's narrative: trial by ordeal. A defendant might be forced to grasp a heated iron for several seconds; if he had a normal recovery from the wound he was acquitted, but if the wound failed to heal properly he was convicted. Which outcome he experienced was thought to be determined by God, so responsibility for imposing blood punishments was shifted heavenward from the human judge and witnesses. Trial by ordeal was a last resort, reserved for cases in which the human judge could not determine guilt or innocence with confidence. But such cases were common, because of the reluctance of witnesses to testify; they feared damnation should their testimony result in the blood punishment of an innocent person. For then it would be not the judge who killed, or the law that killed, but the witness who killed.

The ordeal was an unsatisfactory method of determining guilt. In particular, it resulted in the acquittal of many guilty people (assuming the wound inflicted by a heated iron usually healed). Any notion that God might want a lot of guilty people to be acquitted was unacceptable. There was even a theological objection to shifting responsibility for doing temporal justice from human beings to God: it was an imposition on God.

Eventually the church forbade its priests to bless the ordeal, and so the secular authorities had to cast about for a replacement, something that would enable guilt to be determined by human judgments on evidence. The replacement, which was worked out in the twelfth and thirteenth centuries, the period in which the seeds of the modern concepts of criminal justice were sown, took different forms on the Continent and in England. Continental European jurists downplayed the role of the witness as a source of evidence (for it was hard to get witnesses to testify, owing to their fear for their souls) by empowering the judge to investigate the facts and to order torture if that was necessary to extract a confession that would confirm the defendant's guilt and thus provide the judge with moral comfort. Though coerced, these confessions were thought to be reliable because it was impermissible to torture a defendant unless there was other evidence of his guilt.

Modern continental European criminal procedure retains the "inquisitorial" approach (in which a trial is less a contest than a cooperative endeavor to verify the results of a pretrial investigation by judicial or law enforcement authorities), minus torture. Whitman regards the approach as better for determining facts accurately than the common-law approach--an adversary system in which the trier of fact, whether judge or jury, plays an umpire-like role rather than investigating the facts, which is left to the lawyers to do.

The common-law response to the abandonment of trial by ordeal involved shifting responsibility for determining guilt from the judge (who now would mainly preside) to a twelve-man lay jury. Whitman does not explain who was eligible for jury duty or how jurors were selected--an unfortunate omission--but he emphasizes that originally the jurors were conceived of as witnesses and investigators rather than merely as triers of fact (though they were that, too). They were picked from the village or neighborhood where the crime had occurred and were therefore presumed to have direct knowledge of an accused's guilt or innocence, which the royal judges--who, although they circulated throughout the country hearing cases, lived and worked mainly in London--were unlikely to have. Whitman argues that the reason the English were able to coerce people to serve on juries despite the peril to the jurors' souls, whereas the continental Europeans instead created a judicial bureaucracy to minimize reliance on witnesses, was that English monarchs, during the crucial period of transition from the ordeal to proto-modern methods of criminal procedure, were more powerful than their continental counterparts.

The English approach gave the judge moral comfort, but it put the jurors in the hot seat. Since they were expected to decide a criminal case on the basis of their personal knowledge of the defendant's guilt or innocence as well as the testimony of (other) witnesses, they could not say that "the law" dictated a guilty verdict, and so they were in peril of damnation if they rendered such a verdict. This led to many acquittals of the guilty, just as the ordeal had done. At first the judges responded by trying to coerce jurors to return guilty verdicts in cases in which the judge was convinced of the defendant's guilt; if the jurors voted to acquit, the judge might order them to reconsider their verdict on pain of a fine or imprisonment if their reconsideration did not result in a guilty verdict. This practice was obviously unsatisfactory (it also, though Whitman does not make this point, placed the judge in moral jeopardy) and it was gradually abandoned, along with the juror's dual role as witness and judge. In what became the rare case in which a juror had evidence bearing on the defendant's guilt or innocence, the juror was required to testify as a witness under oath, so that he could be cross-examined.

It was not until the 1780s that judges began instructing juries to convict the defendant if convinced of his guilt beyond a reasonable doubt, and otherwise to acquit. Whitman argues that the American Revolution, by interrupting the practice whereby thieves and murderers were transported to the American colonies in lieu of being executed, increased the frequency of blood punishments and thus the risk of excessive acquittals as a result of jurors' moral anxieties. (Actually, thieves and murderers continued to be sentenced to transportation during the Revolution, and were imprisoned in hulks on the Thames against the time when a destination could be found.) The reasonable-doubt instruction was designed to provide moral comfort, albeit in tempered form, because jurors were not to take the "safer way" of acquitting if there was any doubt of guilt, but only if the doubt was reasonable.

So, Whitman argues, requiring proof beyond a reasonable doubt has nothing to do with improving the accuracy of criminal trials: "It is a fossil, a misconstrued fragment of the Christian past.... The underlying concern was not with protecting the defendant at all. It was with protecting the jurors." He adds that it would be a useful concept if we could somehow restore "the old conviction that judging and punishing are morally fearsome acts," but that if our concern is with the adequacy of the criminal trial as a method of determining guilt and innocence accurately, we should give up not only the reasonable-doubt rule but also the rule that the jury must be unanimous to convict (or acquit), along with the various rules that exclude probative evidence from criminal trials. He also believes that we should adopt the continental European system of criminal procedure. The only reason we cling to the common-law approach, Whitman argues, is the common law's inveterate conservatism.

II.

Whitman makes a convincing case that the desire to give moral comfort to judges and others involved in deciding whether to impose blood punishments influenced the development of criminal procedure in the Middle Ages. But his further argument that the reasonable-doubt rule is in origin a moral-comfort rule, and in practice is unrelated to factual proof, is unconvincing. Five and a half centuries intervened between the rejection of the ordeal by the Fourth Lateran Council and the emergence of the rule. Of course, if fear of damnation had been increasing during this period, if blood punishments were used increasingly, and if the reliance of judges and jurors on private knowledge rather than on evidence provided by witnesses was increasing, one would understand a growing demand for a method of providing moral comfort, since the supernatural moral comfort provided by the ordeal (God taking punishment out of human hands) was no longer available. But the historical and cultural trends were in the opposite direction. Fear of damnation declined; blood punishments declined with the rise of transportation as an alternative to execution and mutilation; and jurors and trial judges became judges of facts rather than sources of facts, and as long as they just executed the law they were in no danger. The reasonable-doubt rule, if it really is a moral-comfort rule, came along when it was no longer needed.

We are now 225 years further along. Fear of damnation has continued to decline. (In England it has virtually disappeared--only 35 percent of the population believes in God--but English juries are still given the reasonable-doubt instruction.) Blood punishments have disappeared completely in England and almost completely in the United States. I would wager that not one American in a hundred thousand has even heard the term "blood punishments." Yet there is no movement to stop instructing jurors that to convict they must find guilt beyond a reasonable doubt.

The alternative hypothesis to Whitman's moral-comfort theory, propounded by John Langbein, another legal historian at Yale Law School, is that the reasonable-doubt standard emerged when it did because lawyers were playing an increasingly large role in criminal cases. This resulted in greater attention to legal formalisms, such as burden of proof, but also in a more adversary litigation process, in which lawyer trickiness and inequality of wealth created a greater danger of convicting the innocent.

Whitman rejects Langbein's explanation (which is admittedly speculative) in part because Whitman thinks that in the pre-modern era, when most people lived in villages or small neighborhoods and had limited privacy, "really difficult factual puzzles were much less common than they are today." The guilty party was almost always known, so procedural rules, and eventually the reasonable-doubt rule, must have been about something other than establishing where the truth lay. But he provides no evidence for this suggestion, which is not at all self-evident. There was less privacy, it is true, but there was also much less investigative machinery.

And if Whitman is right that there was never any doubt about guilt, why did judges, jurors, and witnesses need moral comfort? There had to be factual uncertainty to make moral discomfort a concern for triers of fact. Of course, there is always metaphysical doubt, which is the sort that fascinates philosophers. It takes such forms as asking for proof that one is not a brain in a vat being fed false impressions of an external world by the scientist who controls the vat. And in most real cases, even when guilt is clear, there are various loose ends, which defense lawyers tug on in an effort to create doubt in jurors' minds. It thus makes epistemic sense to tell jurors that only "reasonable" doubt, the sort of doubt that affects a person's behavior (as doubt about the existence of the external world does not), warrants voting to acquit. It is only when a juror votes to convict someone who he reasonably doubts is guilty that he is likely to feel qualms, or, in an earlier era, fear damnation.

In arguing that there was a time when guilt was always clear, Whitman not only undercuts his own thesis but also embraces an impoverished conception of what facts are material to guilt or innocence. Suppose it is conceded that X hit Y with a club and killed him. That does not mean that X is guilty of murder. He may have been swinging the club at a rabid dog and hit Y by accident. He may have killed Y in self-defense. He may have erroneously but reasonably believed that Y was attacking him. He may just have tapped Y lightly, even playfully, with the club, but unbeknownst to him Y had an infected scalp and died of blood poisoning. The most elementary who-hit-whom facts of a case must be characterized with reference to the defendant's understanding or intentions before an inference of guilt can be drawn. Whitman overlooks this rather elementary point.

His handling of a rule related to the reasonable-doubt rule--that the jury's verdict be unanimous--is also unconvincing. He considers it another device for giving the jurors moral comfort: "If we all vote to convict, none of us is fully responsible." The opposite is true. The requirement empowers each juror to prevent a guilty verdict, and thus makes each member of a jury that is unanimous to convict complicit in that decision. Whitman's mistake is surprising, because he argues within a few pages that the practice of giving a blank cartridge to one member of a firing squad (of course without disclosing which member) is a moral-comfort device, since then no member of the firing squad knows that he shot the condemned person. That is the opposite of the situation in which each juror knows that his vote to convict was essential to the conviction. They cannot both be moral-comfort devices.

Whitman notes that continental European juries (when juries are used in continental courts, which is rarely) need only a supermajority, and not unanimity, to convict. He prefers such an approach because "there is no reason to suppose that an uncertain fact is more securely established because twelve out of twelve laypeople agree on it, rather than nine out of twelve, or ten out of twelve" (and here he repeats that the real purpose of the requirement of unanimity is to give the jurors moral comfort). But requiring unanimity compels jurors to deliberate longer, because every member of the jury must be convinced before a verdict can be rendered; and fuller deliberation is itself a check on error.
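
Whitman's suggestion that twelve-of-twelve agreement adds nothing epistemically can at least be stress-tested with a toy model. The sketch below is my own illustration, not anything in the book or the review; it assumes that each juror votes independently and errs with a fixed probability, an assumption that real, deliberating juries violate (which is itself part of the case for unanimity):

```python
# A minimal sketch of jury voting rules under an (unrealistic)
# independence assumption: each juror votes guilty with probability q,
# independently of the others. The function gives the chance that at
# least k of n jurors vote to convict.

from math import comb

def p_conviction(q: float, k: int, n: int = 12) -> float:
    """Probability that at least k of n independent jurors vote guilty,
    if each votes guilty with probability q."""
    return sum(comb(n, i) * q**i * (1 - q)**(n - i) for i in range(k, n + 1))

# Chance of convicting an innocent defendant if each juror wrongly
# votes guilty with probability 0.1:
print(f"{p_conviction(0.1, 12):.2e}")  # unanimity:       1.00e-12
print(f"{p_conviction(0.1, 9):.2e}")   # nine of twelve:  ~1.66e-07
```

Under independence, the unanimity rule is several orders of magnitude more protective against false convictions than a nine-of-twelve rule; to the degree that deliberation correlates the jurors' votes, the gap narrows, but the rule is hardly inert as a matter of proof.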

Whitman stumbles again when he tries to draw lessons for the present from his narrative history. He thinks that the evaluation of a modern legal concept requires the exploration of its historical origins. That is often the case when the concept no longer makes sense. And it may no longer make sense because it is indeed a fossil, a remnant. And digging up fossils and remnants contributes to historical knowledge--but history is not a good enough reason for historicism. A rule need not be obsolete just because its original purpose has no modern significance. (To think otherwise is to embrace the genetic fallacy.) It may be a matter of new wine in old bottles, as emphasized by Holmes in The Common Law. Consider the deodand. In early law, an inanimate object that killed a person, for example the runaway wheel of a cart, or a falling tree, was treated as a criminal: tried, convicted, and executed (that is, destroyed). This is as obsolete a legal notion as one can imagine. And yet when a ship causes damage in a collision, the victim can file suit against the ship, just as if it were a deodand. The reason, Holmes explained, is that treating the ship as a defendant enables a local court to exercise jurisdiction over the case, thus sparing the victim from having to bring suit against the ship's owner, who might be thousands of miles away.

Or, coming closer to home, think of the modern function of requiring a witness to swear to tell the truth, the whole truth, and nothing but the truth, so help him God. (He can substitute an affirmation for an oath, though few witnesses do so.) No experienced participant in the modern criminal trial believes that a fear of divine punishment induces witnesses who would otherwise lie to tell the truth because they have sworn to God to do so. Rather, the oath provides a solemn warning that a lie will subject the witness to the possibility of being punished for perjury.

So even in the unlikely event that the requirement of proving guilt beyond a reasonable doubt was introduced in the 1780s as a device for giving jurors moral comfort without precluding the conviction of plainly guilty defendants, it may--in fact it does--serve a valid epistemic function today. It does this, to repeat, by weighting false positives (convictions of the innocent) more heavily than false negatives (acquittals of the guilty), but not much more heavily, which would be the effect of requiring jurors to be certain of the defendant's guilt before they could vote to convict him.

Unless Whitman believes that no legal practice can be sound if it has a tainted pedigree, I do not understand his animosity toward the reasonable-doubt rule. The only non-genealogical point he makes against it is that judges generally refuse to give jurors a definition of "reasonable doubt." That is true, but it is not a proof of incoherence. (And if the rule is incoherent, how could it ever have provided moral comfort?) Many words and phrases, though used without difficulty or ambiguity, cannot be made clearer by being defined; try defining "time." In the typical criminal trial, the prosecution puts on its case; the defense lawyer tries to sow doubts in the jurors' minds; and the prosecution ripostes by trying to persuade the jurors that the doubts are unreasonable and so should not shake their confidence in the defendant's guilt. The notion of proving guilt "beyond a reasonable doubt" is part of popular culture as well as legal culture, and jurors know that it means they must have a high degree of confidence in a defendant's guilt before they can convict him, though they need not be absolutely certain of his guilt.

Whitman says that the continental European legal systems have avoided confusing moral-comfort doctrines with proof-of-guilt doctrines, and that their methods of determining guilt and innocence are superior to ours. But he says so little about those methods that the reader cannot evaluate his claim. He does not explain, for example, what continental jurors are told, if anything, about the degree of doubt that requires an acquittal. He does not mention the French rule of intime conviction, whereby a judge (or a juror) is not to convict unless he feels sure of the defendant's guilt.

Whitman is critical of rules that exclude some types of evidence from being presented to jurors, and he implies that such rules exist only in common-law systems. He does not specify which exclusionary rules he disapproves of. Surely not all: does he think that illegally seized evidence, coerced confessions, fourth-degree hearsay, or evidence obtained in violation of the attorney-client, marital, or doctor-patient privilege should be freely admissible in criminal trials? Or that European criminal justice systems exclude no evidence from trials? He does not say; but in fact European courts do exclude some types of evidence, such as coerced confessions, though not as many types as common-law courts do.

The important point is that even if Whitman had proved his historical thesis--that the requirement of proof beyond a reasonable doubt originated as a moral-comfort rule--this would not be evidence that the continental European system of criminal justice is superior to the Anglo-American system. The success of right-wing originalists (such as Antonin Scalia) in shaping the terms in which issues of constitutional law are debated at the judicial level, and of left-wing originalists (such as Bruce Ackerman and Akhil Reed Amar) in shaping the terms of the debate at the academic level, has created an exaggerated sense of the normative force of origins. Originalists are like genealogists who study the pedigrees of soi-disant aristocrats for flaws that will unmask them as impostors. Such a process can liberate us from thralldom to tradition, but that is about all that it can do. It certainly cannot undermine a legal doctrine that can be justified by reference to contemporary needs rather than to remote origins. 

Richard A. Posner is a judge on the U.S. Court of Appeals for the Seventh Circuit. This article originally ran in the February 27, 2008, issue of the magazine.