By Dr. Ken Broda-Bahm:
If I had to pick one trial reform that has the best chance of promoting reliable information in voir dire and of decreasing reliance on demographic biases, it would be the greater use of supplemental juror questionnaires. A well-designed questionnaire allows you to uncover the attitudes that are most relevant to bias in a given case context. Here are seven posts laying out the reasons why.
In a pending battle of the tech titans, Oracle is seeking $1 billion in damages in a copyright case over application programming interfaces, and Google is claiming a fair use defense. With those household names facing off, one would think that the case would warrant some additional time and process for voir dire -- a comprehensive juror questionnaire, perhaps, looking into the potential jurors' experiences and attitudes relating to the technologies in general and the companies in particular? That is certainly what the two parties thought, but their effort to present a joint motion for a supplemental juror questionnaire was met with a swift "no" from U.S. District Court Judge William Alsup. As reported in a recent article in Ars Technica, the judge complained that the proposed questionnaire included "so many vague questions" that "the loser on our eventual verdict will seek, if history is any guide, to impeach the verdict by investigating the jury to find some 'lie' or omission during voir dire." The article included a link to the questionnaire itself, and based on the judge's reaction, I was prepared for a document that was long, detailed, onerous, intrusive, and unfair. Instead, what I found was a surprisingly simple, reasonable, and relevant questionnaire. There were no questions that could be considered violative of juror privacy, almost no questions on opinion at all, and no indications whatsoever of possible traps for later impeachment of the verdict. In short, the Oracle and Google teams seem to have done everything right. They kept it brief: The questionnaire is only two pages long. They kept it on point: All of the questions are either boilerplate demographics or focus on experience and knowledge in the relevant technology field. And they made it a joint request rather than fighting over different versions. (Read More).
For those of us in the social sciences, data is our currency. And of course, we would prefer that our window into attitudes and trends gives us an unbiased view. The reality, though, is that the "how you collected it" matters as much as the "what you collected." A simple survey, for example, can be administered by an interviewer (as in a telephone survey), or it can be self-administered by the respondent (as in an online survey). That latter "self-service" mode isn't new, of course, since a paper survey is also self-administered. But as online survey approaches swiftly move to displace the traditional telephone survey, the question of influence exerted by the way the data is collected, the "mode effect," is becoming more important. To the extent that litigators rely on that kind of data -- for venue motions, community attitude surveys, focus groups, and mock trials -- it's a question that matters to litigators as well. A new Pew Research Center investigation focuses on that difference between telephone and online data collection. Looking at 3,003 survey respondents who answered the same questions either by telephone (interviewer administered) or online (self-administered), Pew concluded that mode differences are "fairly common, but typically not large." Across a broad set of 60 questions, the answers obtained via the two methods differed by an average of 5.5 percentage points. That difference is nothing to sneeze at, and it is worth noting that for some of the questions the mode effect ranged as high as 18 percentage points. That by itself is a big deal. But the more important finding is that the differences weren't random; they followed a pattern. While we are tempted to wonder which answer, telephone or online, is the true one and which is skewed, there probably is no good answer to that question. (Read More).
Watching the Wizard of Oz recently with my three (and a half!)-year-old daughter, we came to the familiar scene of the fearless Toto interrupting the Wizard's speech by pulling back the curtain on a man furiously working levers and wheels. When Dorothy and company ignore the instruction to "pay no attention to the man behind the curtain," it becomes clear that what we see of the "wizard" is an elaborate facade. There is a similar facade at work during jury selection, as well as every other personal interaction: we present a version of ourselves that comports with social expectations. To those who research human attitudes, this is known as "social desirability bias," and it is probably the biggest barrier between you and honest answers in voir dire. Social desirability bias (Crowne & Marlowe, 1960) refers to the tendency to edit answers in the direction of what is seen as more desirable. When surveyed, for example, people will tell you that they eat healthier, vote more often, and spend more quality time with their children (watching and discussing the Wizard of Oz, for example) than they actually do. And it isn't necessarily a process of consciously lying. Instead, they are providing you with their own self-image, which is colored by a curtain of selective perception. Success in using voir dire to uncover the real attitudes and experiences that could turn a juror against your case requires strategies for revealing the woman or man behind that curtain. One strategy for dealing with social desirability is to rely on jurors' oath to tell the truth. But that isn't a very good strategy. If jurors don't feel that they are lying, but are instead selectively recalling behavior or interpreting attitudes based on an idealized filter, then an oath or a promise to tell the truth won't necessarily help. (Read More).
Crowds can be scary things. At a debate this past Monday (September 12th), Republican presidential candidate Ron Paul was asked whether his stance against government-mandated health insurance would dictate denying care to a hypothetical man who found himself in a coma without the benefit of catastrophic health insurance. "Are you saying," Wolf Blitzer asked, "that society should just let him die?" In response, a chorus of voices from the audience shouted "yeah!" Less than a week earlier, in a similar Republican candidates debate, Texas Governor Rick Perry received his biggest applause of the night, including cheers, hoots, and whistles, when the moderator noted the Governor's 234 executions. It is a record for a modern governor, and there is compelling evidence that at least one of that number was innocent. But my point is about neither healthcare nor the death penalty, but about what happens to opinions when they become the voice of the crowd rather than an individual's judgment. There is mounting psychological evidence: Collective judgments differ dramatically from individual judgments, and the "wisdom of the crowd" also has a dark side in the form of a herd mentality. Our system of jury trials -- with strict rules of evidence and reliance only on information presented in the courtroom -- is designed precisely to combat mob rule. As the recent trial of Casey Anthony and the upcoming trial of Dr. Conrad Murray illustrate, maintaining that distance from the crowd's beliefs can be difficult. This post includes some research and advice, not only for high-profile trials, but for any trial that has you running against something that the mob at large takes as truth. (Read More).
Whether we're reading the news, shopping, or participating in social media, we are swimming in "likes" these days. Electronic journalism, online retail, and sharing sites like LinkedIn or Facebook all give users an unprecedented ability to participate, broadcasting their preferences with a click of a button or a comment. But are we influenced by these strangers when we consume those views or products? Yes, we are, according to a study (Muchnik, Aral, & Taylor, 2013) just out in the journal Science. There is a herding instinct that kicks in when we hear another's opinion. It is a powerful but not entirely simple phenomenon, and it influences how we should gather and assess opinions in a group context like oral voir dire. The study isn't yet available online, but I was able to track down a hard copy, and it does get a healthy 'like' (and a summary) in a recent EurekAlert! release. Three researchers with backgrounds in business and management looked at this idea of social influence in the context of an unnamed news and discussion site that allows thumbs-up or thumbs-down votes on individual comments. In a five-month experiment, the researchers manipulated these votes to see how that affected positive or negative opinions of the views themselves. They found support for three conclusions. One: the herding effect is real, and people are heavily influenced by positive opinions expressed online. Adding "likes" to a message resulted in a 25 percent higher average rating from other viewers. (Read More).
Put Your Jury Selection on Steroids by Leveraging Pretrial Research: Lessons from the Barry Bonds Trial
This post is focused on bulking up your ability to target high-risk jurors and performance-enhancing your voir dire. So speaking of steroids, let's start with Barry Bonds. Jury selection for the perjury trial of the former San Francisco Giants power hitter, charged with lying to a grand jury over steroid use, starts this week. Prospective jurors will fill out a 19-page questionnaire focusing on the factors that both sides believe should help to reveal bias and guide the process of exercising cause and peremptory challenges. But how reliable is the information underlying these questions? A recent New York Times online article contains a curious contrast of opinions on the question of how tightly San Franciscans will cling to their opinions on Bonds. Howard Varinsky, a jury consultant famous for his work in high-profile trials like Michael Jackson's, says "things have changed..." and a lot of people have "grown very ambivalent" on Bonds. Another consultant, Chris St. Hilaire, however, says that opinions are likely to have remained very strong: "finding someone who doesn't have an opinion about Barry Bonds is like finding a cowboy who doesn't have an opinion about a horse." So who is right? According to some recent research (Druckman et al., 2010), the answer would generally depend on how the attitudes were formed in the first place. If they were formed in direct response to new information (called "online processing"), they would be more long-lasting, and if they were formed based on recalled information (called "memory processing"), they would be less durable. But in the case of broad-based attitudes formed as a result of a drawn-out public saga, like the controversies in the run-up to the Bonds trial and most other media-saturated trials, potential jurors are going to have a mix of both attitudes. (Read More).
Get It in Writing: Seven Reasons for a Jury Questionnaire (And Three Things That Kill Its Usefulness)
A recent piece in The New York Times focuses on the increasing prevalence of longer questionnaires for those called in for jury duty. While such questionnaires tend to attract media attention in high-profile cases, the article notes that they've "become a familiar presence in courtrooms across the United States." The reaction from consultants and many litigators, though, is probably "...still not as familiar as we would like." While the article focuses on questions of often-limited utility (e.g., What TV shows do you watch?), the use of a questionnaire holds far greater potential. In nearly all cases, a focused questionnaire can yield data that makes your jury selection more targeted, accurate, and effective. Yet supplemental juror questionnaires (SJQs) still aren't an expected tool for most judges. In our own surveys of judges, we have found the number one reason why judges don't use them: They're not asked to. So ask. And in building your case for a questionnaire, you'll have a much easier time winning judicial approval when you make it 1) a joint request, and 2) for an appropriate and to-the-point questionnaire. But how you ask for that questionnaire still matters, and the logistics of administration matter as well. In this post, I will share seven reasons for a questionnaire, and also three factors that, when present, kill its usefulness. (Read More).
Other Posts on Jury Selection:
- Consider the Small Chance that "Big Data" Might Pick Your Jury
- Don't Expect Cause Challenges to Do the Work of Peremptories
- Ask About News Sources in Voir Dire
Photo credits: 123rf.com, main image. Other image credits appear in each individual post.