By Beth Schwartzapfel, a Brooklyn-based freelance journalist with an interest in criminal justice issues.
Warren Horinek did not murder his wife. That’s what he said, that’s what the medical examiner said, that’s what the homicide sergeant said.
Even the district attorney’s office in the Horineks’ hometown of Fort Worth, Texas, agreed that he was innocent—not something a Texas prosecutor typically says. But when Bonnie Horinek died in 1995, her parents refused to believe what the evidence strongly suggested—that Bonnie shot herself—and instead they enlisted the services of a blood-spatter analyst to prove that it was their son-in-law who had killed their daughter.
The spatter analyst zeroed in on the blood-soaked T-shirt Horinek was wearing when the paramedics arrived. To him, the fine spray of blood on Horinek’s left shoulder was not from administering CPR, as Warren said it was, and as the 911 recording seemed to indicate, but from shooting Bonnie at close range. On the basis of that testimony, Horinek was convicted of murder and sentenced to 30 years. But did they really get their man? Horinek’s lawyers have filed a writ of habeas corpus to try to have him released; much of the spatter analyst’s testimony, the lawyers argue, “was contrary to known and accepted science.”
In the age of CSI and Dexter, we’re led to believe that forensic science is a high-tech discipline, powerful and sophisticated enough to catch any criminal.
As it turns out, whether blood-spatter analysis and disciplines like it qualify as “science” at all is a matter of increasing debate. In a sharply critical report issued in 2009, the National Academy of Sciences said, “The simple reality is that the interpretation of forensic evidence is not always based on scientific studies.” Taking aim at disciplines as varied as ballistics, hair and fiber analysis, bite-mark comparison—even fingerprints—the report, cited by Horinek's lawyers, declared, “This is a serious problem.”
The last few years have seemed to bear out the report. Dozens of crime labs across the country, in places including Nassau County, N.Y., San Francisco, Virginia, Cleveland, Oklahoma, and Baltimore, have been caught up in scandals involving mishandled evidence and false or misleading forensic testimony. This past summer, an audit commissioned by the North Carolina attorney general found that the State Bureau of Investigation had withheld or distorted evidence in more than 200 cases.
Even some of the best-funded and most sophisticated crime-fighting organizations are being taken to task for their use of forensic evidence. This week, the New York Times reported that the Federal Bureau of Investigation had “overstated the strength of genetic analysis” during the investigation of Bruce E. Ivins, who allegedly mailed anthrax to newsrooms and Senate offices in the wake of the 9/11 attacks.
A year-long investigation by the independent journalism nonprofit ProPublica revealed major problems in the nation’s coroner system: pathologists not certified in pathology, physicians who flunk their board exams, even coroners who are not physicians at all. “In nearly 1,600 counties across the country,” the investigation found, “elected or appointed coroners who may have no qualifications beyond a high-school degree have the final say on whether fatalities are homicides, suicides, accidents or the result of natural or undetermined causes.”
For his forthcoming book, Convicting the Innocent: Where Criminal Prosecutions Go Wrong (Harvard University Press, April 2011), University of Virginia law professor Brandon Garrett examined the trial transcripts and other legal documents of the first 250 people to be exonerated by DNA in this country. He discovered that in more than half of these cases, trials were tainted by “invalid, unreliable, concealed, or erroneous forensic evidence.” The errors included analysts making up statistics on the fly, implying that their methods were more scientific than they actually were, and exaggerating or distorting their findings to support the prosecution.
Peter Neufeld, a lawyer in New York and cofounder of the Innocence Project, which has helped to facilitate many of these exonerations, calls it the “elastic expert: no matter what you see, I can distort it so that it would be a match.”
This “elasticity” is possible because the tests are largely subjective. Just how much human judgment is required depends on the discipline: DNA testing is mostly—though not entirely—done by machine, for instance, whereas microscopic hair comparison is based solely on the analyst’s opinion. Even fingerprints, which many of us regard as foolproof tools for identifying culprits—think Dexter feeding a print into his computer and a bad guy’s photo and driver’s license appearing on the screen—in fact rely largely on human interpretation, and therefore are subject to human error.
One of the most famous examples of the fallibility of fingerprint analysis is the case of Oregon lawyer Brandon Mayfield, arrested in 2004 in the wake of the Madrid train bombings. Working from a partial print that Spanish authorities had found on a plastic bag of detonators, several top FBI analysts declared Mayfield’s print a match. That is, until Spanish authorities identified Ouhnane Daoud, now wanted for terrorism in connection with the crime. When it became clear that Daoud’s prints were a much better match, the FBI was forced to admit that its own bias and “circular reasoning” had led it to Mayfield, who had no involvement in the bombings.
Part of the problem is what social scientists call “context bias.” Most forensics labs are located within police departments, so analysts may see themselves as working “for” the prosecution. They also usually have information about the evidence they’re testing—for example, that the suspect has a prior record. “There’s a lot of research to suggest that knowledge could have a biasing effect,” says Jennifer Mnookin, a professor at the UCLA School of Law.
In a recent Supreme Court case, Justice Antonin Scalia, writing for the majority, said that whether consciously or not, an analyst “responding to a request from a law enforcement official may feel pressure—or have an incentive—to alter the evidence in a manner favorable to the prosecution.” The justices’ ruling means that forensic test results may be subject to the same kind of scrutiny as any other evidence, and that an analyst from the lab that ran the test must be present in court to be cross-examined, just like any other witness.
“Obviously, most people in this community are trying to do their jobs well and are not trying to frame innocent people,” says the University of Virginia’s Garrett. “But what we’ve seen come out of these exoneration cases and in additional scandals at the laboratories is that this is not a problem of a few bad apples. Who is the competent analyst that can testify about a technique that’s fundamentally unreliable? That’s not a bad-apple problem. That’s a serious problem with our entire system.”
At the heart of these criticisms is the issue of what scientists call validity and reliability. A test is valid if its results are factually accurate. A test is reliable if repeated tests lead to the same conclusion. Some forensic tests, like blood typing, are very reliable: no matter how many times your doctor draws your blood, you will always have the same blood type. Occasionally there are mistakes, of course, but they are predictable: blood-typing tests have well-documented and well-understood error rates. Others, like hair comparison, are unreliable: studies have shown that multiple technicians examining the same two hairs—even the same technician examining the same two hairs at different times—come to different conclusions. Critics say that many of forensic science’s most basic tools are neither reliable nor valid.
For example, at the trial of Jimmy Ray Bromgard, who served more than 14 years of a 40-year sentence for sexual intercourse without consent until he was exonerated in 2002, the director of the Montana State Crime Lab told the jury that hairs found on a blanket in the victim’s house “matched” hairs taken from Bromgard’s body. There were so many hairs that matched so well, the analyst said, that there was a “one in 10,000” chance the hairs could have come from anyone else.
But no one has ever established statistics about the microscopic characteristics of hair, so the “one in 10,000” figure had no scientific basis. How common is it for a person to have a particular hair color, or for a hair to crinkle or curl just so? Scientists have never answered those questions systematically. And what does “match” mean, anyway? There are no uniform guidelines for how many characteristics two hairs must have in common before they’re said to “match”; the standard varies from one lab to the next, even from one technician to the next.