Tag Archives: jury selection

March 25, 2013

Practice the Pivot in Oral Voir Dire (Part One): The Basic Model

By Dr. Ken Broda-Bahm: 

When voir dire goes well, it creates a balance between the goals of spotting the high-risk jurors and safely drawing themes from the more favorable jurors. At the same time, the questioning process should build rapport and feel natural to both the attorney and the panelists. When voir dire goes wrong, it is often in one of two ways. On the one hand, some attorneys will ask only tightly controlled questions (yes/no, or cross-examination style) that are designed to lead and to minimize the risk that prospective jurors will taint the panel by sharing opinions and experiences that run counter to your messages in the case. On the other hand, some attorneys will simply ask jurors what they think on topics relating to the case, giving equal voice to those whose views would help and those whose views would harm your message. The first approach errs in learning too little: It constrains the discovery of relevant information about those who will judge your case. The second approach errs in potentially learning too much: It creates the real risk that unfavorable jurors will not only be discovered, but given a soapbox as well.

The right balance is to identify the bad jurors, but talk to the good jurors. In other words, your questions should create a context in which less favorable jurors are comfortable admitting to a bias (often, by simply raising a hand to agree or disagree with a statement made by another panelist), while eliciting the greater balance of thematic statements from favorable jurors when they are in a safe and strike-proof majority. Even as you try to limit the soapbox opportunities offered to less favorable jurors, you also need to ensure that the panelists believe that you are genuinely interested in hearing all they have to say. 

In this post, the first of a three-part series, I will be introducing an approach to question structure that I call the “pivot.” The basic approach is to ask open-ended questions (so panelists don’t feel hemmed in or led), but then to pivot strategically off those answers in order to focus discussion in ways that reveal the higher-risk jurors while talking to the lower-risk jurors. In this post, I’ll describe the basic model; in the next post, I’ll discuss some adaptations to make when the conditions get tricky; and in the final post, I’ll share a short video demonstrating the process. 

Like a good witness examination, your voir dire should be neither off-the-cuff nor unalterably scripted. It is essential to have a plan, but equally essential to stay loose enough that you can react to what you’re learning and choose follow-up questions that aren’t in your notes. Unlike a good witness examination, however, the goal is to genuinely discover what the person answering is thinking. That is where the pivot comes in: You need to control and focus discussion while still showing genuine interest in what panelists have to say. In exercising this kind of soft control, the fundamentals are to avoid overexposing favorable jurors, or overcommunicating with unfavorable jurors. 

To do that, my model breaks down into a series of steps. It helps to keep the broad structure in mind so you mentally know where you are in the process instead of just marching through a list of questions. Using a running example of voir dire in an employment defense, let’s look at the sequence you would follow within each major topic. 

1. The Warm Up

When taking a deposition or examining a witness in trial, you have a right to expect that the witness is ready to answer as of the first question. But you can’t expect that with your potential jurors. Because you are expecting them to not just answer questions, but feel comfortable enough to share their own attitudes and experiences, they need to be warmed up, or in psychological terms, “primed” to think about and share their feelings. As you introduce each topic, start with a softball that brings the issue to mind. You can pose the question to the whole group or you can select a potential juror that you haven’t heard from yet. 

Attorney: How many of you have had a job where someone else is deciding if you’re hired and fired — in other words, not the boss at the top, and not your own boss either? By a show of hands, who has or has had that job? Okay, that is pretty much everyone. Mr. M, what was the most recent job like that for you? Ms. J, how about you? 

2. The Open-Ended

Once the topic has been introduced and panelists have drawn on their own experiences and thoughts on the subject, it is time to ask an open-ended question. The goal is to get a potential juror to express an opinion, but you don’t need to (and won’t have time to) ask each one individually. Initially, you can choose someone you haven’t heard from, or someone at random. As you learn more about the likely opinions they hold, however, you can select panelists who are more likely to give a helpful response. If time permits, ask each member of the panel at least one open-ended question. If a potential juror is neutral or has no real opinion they can articulate, simply select another member of the panel and ask the same question.  

Attorney: How many of you have heard of something called “at will” employment, or the idea that an employer or an employee can end the employment relationship at any time, with or without cause, just by giving notice? What do you think about that idea? Do you think it is fair or unfair? 

3. The Pivot

This step is obviously the key to the approach. Instead of asking each of the venire members in sequence what they think, the trick is to turn an individual answer into a group answer by pivoting off the first panelist’s response. It can be as simple as asking, “How many of you agree?” or “How many of you disagree?” and calling for a show of hands. Keep two questions in mind as you transition from the prior answer: One, “Does the response help or hurt my case?” and two, “Is the response likely to be a majority or a minority point of view?” I’ll cover this more fully in part two of this series, but there will be some situations where you’ll want to recast the question a bit as you pivot in order to reduce the chances of putting a spotlight on potential jurors who are favorable to you, but in the minority on the panel (as doing that just helps the other side identify their strikes). Ultimately, you want to pivot with the goal of dividing the group and having a safe and unstrikeable majority on your side of the question. For now, let’s focus on the simplest illustration in response to the open-ended question above. 

Mr. A:  “At will” employment is just the reality these days, it’s inevitable.

Attorney: Why do you think so, Mr. A?

Mr. A: Companies need to be flexible, so sometimes they just need to reduce their work force, and sometimes it just isn’t working out.

Attorney: Thank you, so you think “at will” employment is something companies need? By a show of hands, who disagrees with Mr. A? And who agrees?

Note that in response to that question, most groups will divide themselves so that a majority ends up supporting “at will” employment, at least in theory. If it breaks that way, you can take note of who the worse potential jurors are (those who disagreed with Mr. A), while also noting the better potential jurors within the safe majority (those who agree). 

4. The Low Risk Follow-up 

There is always some risk when you follow up – since you never truly know what a potential juror will say. But after the group has been divided based on a question like the one above, it is decidedly more predictable to follow up with the group that is likely to be lower risk and more favorable to your side of the case. Remember the two questions to keep in mind when you hear the open-ended response: Is it helpful or harmful to my side, and is it likely to be a majority or a minority point of view? Your expected answers to those questions will guide how you follow up. When a potential juror responds with a helpful majority opinion, you will want to amplify that response by spreading the theme to others on the panel, asking for more on that opinion from the same jurors and others. When jurors respond with a harmful minority opinion, flip the statement by asking about the opposite opinion.  

Attorney: Ms. S, you raised your hand indicating that you agree that “at will” employment is necessary. Why do you think so? Tell me more about that. 

Ms. S: It’s never nice to let someone go, but the business’s first obligation is to the customer. 

Attorney: Thank you. Mr. D, you also agreed with Mr. A. Can you think of any examples where a business would need that kind of flexibility? 

Mr. D: Sure, a company may need to close a location or they may need to address a productivity problem. Bottom line, they need to build the best team they can, and that’s their right. 

Attorney: Thank you. 

5. The Wrap

As a final step before you leave a topic, it is often necessary to wrap things up by getting a commitment, correcting any misimpressions that your question may have left, or countering any bad messages that may have come out of juror comments. Note that the wrap-up can be one of the few times where you will want to ask a leading question. And in that case, you lead for the same reason you lead in cross: because your goal is more focused on making a point than on gaining information.

Attorney: Thank you all. So knowing that there are different views on whether “at will” employment is fair, does anyone doubt that in many cases, it is perfectly legal?

Following that series of questions during a typical employment defense voir dire can be expected to fulfill all of the goals. You build rapport by demonstrating interest in what the panelists think and by asking open-ended questions. You learn about the higher-risk jurors by pivoting off a response in order to divide the group. And you allow panelists to reinforce your case’s positive themes by following up with the lower-risk members of the venire. Of course, in the real world the questioning is not always that clean. I’ll follow up with a discussion of some of the difficulties in part two of this series and provide a video demonstration in part three. 

There are naturally many approaches to oral voir dire, and the pivot model is just one. Whatever approach you choose, your goals in conducting oral voir dire should be the same: to build rapport, to learn about high-risk attitudes, and to get jurors talking in ways that reinforce your themes. 

____________________

The Series: 

____________________

Other Posts on Oral Voir Dire: 

____________________

Image Credit: Jason Bullinger, Persuasion Strategies

August 9, 2012

The Products Survey (Part I): Adapt to Today’s Product Attitudes

By Dr. Ken Broda-Bahm: 


A Persuasion Strategies/K&B National Research Survey

We are in an election season, and that is a good reminder of the fact that attitudes change. Maybe not fast enough to feed the 24-hour news cycle, but definitely fast enough to influence the litigation climate between cases. Products liability litigation, in particular, is heavily influenced by jurors’ preexisting attitudes on personal responsibility, their specific beliefs about safety, product labeling, and testing, as well as the way they see the relationship between large corporations and individuals. These are all attitudes that vary by venue and over time. Not having your finger on the pulse of these shifting opinions can pose a danger to products litigants. While jurors are definitely committed to hearing the evidence and basing a decision on the particular case instead of their generalized attitudes, the outlook they come in the door with will still determine your starting point in trial.   

This post is the first of three focusing on our own original research. Persuasion Strategies worked with the recruiting and survey company K&B National Research to conduct a nationwide telephone study of 406 jury-eligible participants to measure attitudes on a number of topics relating to products liability defense litigation. In June 2012, we asked participants about their views on product testing, labeling, and legal responsibility. We also asked survey participants to report their leanings on a number of brief litigation scenarios. Part II of the series will focus on a few emergent factors characterizing those jurors who pose the greatest risk to the product manufacturer or seller, and Part III will focus on the special role of anti-corporate bias in mediating the relationship between individuals and companies in products cases. Before getting into that, however, this first post provides an overview of the survey results, as well as the general takeaways for products defendants preparing messages for trial. 
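As a rough, hypothetical check on what a sample of that size can support (my own illustration, not part of the original survey report), the worst-case sampling margin of error for 406 respondents works out to about plus or minus five points at 95 percent confidence:

```python
import math

def margin_of_error(n: int, p: float = 0.5, z: float = 1.96) -> float:
    """Worst-case sampling margin of error for a simple random sample.

    Assumes simple random sampling; p = 0.5 maximizes p * (1 - p),
    giving the widest interval, and z = 1.96 corresponds to a 95
    percent confidence level.
    """
    return z * math.sqrt(p * (1.0 - p) / n)

# The survey described above interviewed 406 jury-eligible participants.
print(f"+/- {margin_of_error(406) * 100:.1f} points")  # about +/- 4.9
```

One practical implication: the near-even 47/48 split on product testing reported below sits well within that margin, which is consistent with reading it as a genuine split rather than a clear majority either way.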

One important note is that for all of the findings below, we are measuring reported attitudes, not behaviors. That is, someone can say they read product labels all the time, but that might say more about social desirability bias than about their label reading practices – in actuality, they might skim or ignore labels like the rest of us. But that doesn’t make the expressed attitude unimportant. Even if it is biased in the direction of desirability, the attitude can be critical to the extent that it helps frame the expectations that jurors bring into the courtroom and apply when evaluating the parties. 

Among the general conclusions of this survey, the following are the most notable: 

How Well Do We Test?

Survey Finding: There is nearly an even split on adequate testing of products. In our survey, 47 percent say that products tend to be “often” or “almost always” adequately tested, while 48 percent say that they are “rarely” or “almost never” adequately tested.

Trial Strategy Recommendation: Measure attitudes on testing either in a supplemental juror questionnaire or in oral voir dire. Even when your case doesn’t involve a direct controversy over the level of testing, a panelist’s opinion about product testing can be a window into their views on how much responsibility the company should have, with those supporting greater testing also being more likely to hold the product manufacturer responsible. 

Do We Follow Warnings?

Survey Finding: About two-thirds feel that consumers follow safety precautions. In our survey, 65 percent reported that consumers “often” or “always” follow recommended safety precautions when using consumer products.

Trial Strategy Recommendation: This possibly exaggerated view of how often consumers follow precautions can be beneficial. When defending a product against a plaintiff who may not have fully followed precautions, normalize the experience of reading and following warnings. If nearly everyone does it, then the plaintiff is in an exceptional class of those who don’t. The more unusual or atypical the plaintiff’s behavior, the easier it is to attribute responsibility. 

Do We Prioritize Safety or Performance?

Survey Finding: Companies and consumers are both seen as prioritizing product performance over product safety. In our survey, 59 percent reported that typical manufacturers prioritize product performance, compared to 36 percent who say they prioritize product safety. A comparable result applies to consumers: 61 percent say consumers prioritize performance, compared to 36 percent who say they prioritize safety. 

Trial Strategy Recommendation: In addition to directly finding out who believes that companies underemphasize safety, it is also a good idea to undercut that dichotomy. If making the product “better” is the same as making the product “safer,” then the incentives run in the right direction and jurors have a good reason to believe the company made the product as safe as possible, not because they are good citizens, but because they have a profit motive to do so.  

Do We Read Labels? 

Survey Finding: Fewer people claim to read labels now than in 2010. Two years ago, fully 57 percent claimed to “read the label word for word,” and today that percentage is down to 37 percent. It may be the effect of the profusion of “terms” that we all have to agree to these days whenever we open a new program or install a new app — just clicking “I agree” and ignoring the terms is probably the default for many, if not most, of us. There could also be a political source for this difference, as we found that those who generally vote Republican are more likely to admit to “skimming” or ignoring product labels, perhaps being more comfortable in a “personal responsibility” mode. 

(Chart: Label Reading)

Trial Strategy Recommendation: Take jurors’ statements about their own label reading with a grain of salt, but understand that they’ll still be willing to apply that idealized view when they evaluate others. Some will say, “Well I would have read that label…” and others will say, “I may not have read it, but if I didn’t I wouldn’t be suing,” but all will to some extent use themselves as the standard in evaluating others, the “Golden Rule” notwithstanding. 

Do We Trust Large Companies?

Survey Finding: We have written extensively on anti-corporate bias, based on a decade of our own research on the concept. Tracking the attitudes year to year has allowed us to recognize changes when they occur. In our 2012 survey, we noticed a lessening of anti-corporate bias in a few areas. For example, 61 percent of our current respondents believe the government favors large corporations over ordinary Americans, compared to 74 percent last year. Just 29 percent reported that there are “too few” lawsuits against large corporations, compared to 47 percent in 2011. This somewhat better picture on anti-corporate attitudes may be a sign that, at least for part of the population, the campaign rhetoric recasting large companies as “job creators” instead of evil monoliths is gaining some traction and mitigating some of the bias. 

Trial Strategy Recommendation: Measure your potential jurors’ anti-corporate bias. We have developed a scale that does exactly that and made it available for free (you can download it here). That scale, particularly when supplemented by the information that will be in Part III of this series, will be useful in identifying the jurors who are likely to be hardest on a products defendant. Apart from jury selection, however, it is also important that you adapt your message in trial. Given that a clear majority still distrusts large companies in general, some of those people will be on your jury. For that audience, you need to convey the face of the company and show the concrete ways that it differs from their view of a “typical” corporation. 

Even with the moderate changes that we’ve seen, today’s attitudes are still generally pro-safety, pro-testing, anti-big business, but also pro-personal responsibility. A consistent two-thirds, for example (66 percent in 2012 and 68 percent in 2010), believe that when individuals are injured using typical consumer products, it is probably the individual’s fault. Discovering the unique mix of these attitudes in your venue and adapting to the ways they’ll influence jurors’ view of your case is the central challenge. 

____________________

Other Posts on Product Defense: 

____________________

Cite research to:  Persuasion Strategies (2012). National Juror Survey: Products. 

Photo credit: Ejimford, Flickr Creative Commons


April 12, 2012

Don’t Select Your Jury Based on Demographics: A Skeptical Look at JuryQuest

By Dr. Ken Broda-Bahm: 


While researching for a previous post, I was reading Professor Dru Stevenson’s (2012) article in the George Mason Law Review, and I came across a jarring sentence asserting that “modern approaches to jury selection” focus on biases relating to factors “such as race and gender.” The author then followed up in a footnote:  “Most indicative of this consensus is the widespread use of JuryQuest software and similar products which compute values for each prospective juror based on such factors.” Never heard of JuryQuest? If not, you are forgiven. In sixteen years of working with consultants and attorneys providing assistance on jury selection, I haven’t come across anyone using it. Nothing in either Stevenson’s article, or the article he cites in the footnote (Gadwood, 2008), supports the claim that use of JuryQuest is “widespread.” Instead, James Gadwood, then a law student and now an attorney, notes the software’s use in the Andrea Yates trial, as well as the handful of client firms – geographically distributed, but not terribly numerous – that are listed on the software maker’s website.  

Beyond the JuryQuest software, the central mystery in both articles is the belief that demographics occupy a central place in modern jury selection. That is like claiming that the accordion occupies the central place in modern music – it is very nearly the opposite of the truth. For at least three decades, researchers have known that demographic factors like race, gender, age, and education are very weak predictors of verdicts, and those who make their living assisting in jury selection have focused instead on learning about the experiences and attitudes that bear upon the issues closest to the case. Still, the perception persists that courtrooms are teeming with consultants and attorneys who are applying the questionable science of demographic prediction, fueled by commentary like Gadwood’s and Stevenson’s, and perhaps by the continued existence of products like JuryQuest. This post aims to correct that misimpression, and to emphasize the factors that truly matter when selecting your panel. 

No, Modern Jury Selection is Not Dominated by “JuryQuest” Demographics

The central point that James Gadwood makes about JuryQuest is well-founded:  the computer program is a walking Batson challenge. By formally incorporating and weighting seven demographic facts about your potential jurors (race, gender, age, education, occupation, marital status, and prior jury service), the program formally relies on the very criteria that Batson, and its related cases like J.E.B. v. Alabama, mark as the sign of an impermissible strike. Gadwood does a very effective job of unpacking the standard and applying it to the use of a program like JuryQuest, and his analysis concludes by urging courts to adopt means to expose the use of “a jury selection tool which unabashedly operates, at least in part, on the basis of constitutionally impermissible characteristics, directly contravening Batson and its progeny” (p. 318-19).  

But where Gadwood errs is in overstating the field’s reliance on this tool. He quotes the attorney who successfully won an insanity defense for Andrea Yates (“You can’t overemphasize the importance of the software…”), but his only support for the claim that use is “spread across the legal industry” is a reference to JuryQuest’s own client list on the company’s website.  As of today, that list includes six civil firms (four in Texas, two in California), twenty-three criminal firms or lawyers, and eight public defender offices. Consultants who work more in the criminal arena (I do, but it is chiefly white collar) may see more use of a tool like this, but I would be somewhat surprised if they did. There is no discussion of the tool that I’m aware of within trial consulting circles, and I have trouble believing that experienced attorneys would set aside their own judgment and evaluation of what they learn from specific venire members and instead rely on the software’s demographic formula.  

No, There is No Case for Relying on Demographics

Nor should attorneys set aside their judgment and evaluation in order to place their faith in demographics. Even if there were no Batson and no progeny, there is simply no social science case to be made for the reliability of demographics as a predictor of juror bias. On this point, the consensus of litigation consultants is clear, and the research backs it up.  To choose just a couple of examples, Fulero & Penrod (1990) reviewed the approach of tying demographics to verdicts and found that demographic variables are at best only modest predictors of verdicts. More recently, Joel Lieberman (2011) notes that the asserted relationship is “still murky after 30 years.” While broadly condemning the label of “scientific jury selection,” what this research is really critiquing is a reliance on demographics, which social scientists in the business have largely given up.  

A critical read of the JuryQuest website points out some of the reasons why. The company’s database uses the seven identified questions alone to rank jurors on a 100-point scale as favoring or opposing your case. This ranking, however, is not based on published research, but on JuryQuest’s own proprietary database of the questionnaire responses of “nearly 45,000” individuals. The tool’s published success rates, much higher than average for both civil and criminal clients, are also based on the company’s own analysis. The website quotes research, but not anything that supports the predictiveness of its demographic criteria. Instead, oddly, the website emphasizes the Kalven & Zeisel 1966 finding that jury verdicts tend to be consistent with that jury’s first vote.  

The unanswered question is whether jury verdicts or first votes are predictable through demographics alone. Or more pointedly, why would just seven demographic variables fare better at predicting bias than a more specific voir dire on your own case?  “Since strongly felt values of individuals are reinforced in group settings (deliberations),” the website explains, “It is important to obtain systematic evidence on values by social groupings, rather than relying on attempts to infer values from questioning individuals in voir dire.”  Unpack that statement and it quickly collapses into nonsense. Because jurors tend to reinforce each other in deliberations, you should trust the fact that a juror’s demographic group (e.g., Caucasian) holds a given value (e.g., a law-and-order mentality) more than you trust what you learn from the specific individual juror in voir dire?  Why?  

While there is good questioning and bad questioning, as we’ve written before, there is no reason to trust what is true in the aggregate more than you trust what is true in the individual. So one demographic group may be more likely to hold a specific value, but nearly always that difference, even when statistically significant, is likely to be relatively minor and to explain far less than the majority of variability you see in the attitude. In other words, you will find about as many individuals who conflict with the stereotype as confirm it. So even when dealing with a real demographic difference, you are better off finding out what the individual thinks. After all, it is the individual and not the demographic group who will be sitting in judgment on your case.  

No, Demographics Are Not Even a Good “Starting Point”

In some of the early press that this program received, criticisms like the ones I’ve leveled were often answered the way attorney Jason Webster did in a 2006 National Law Journal article:  “It’ll never replace asking the juror questions, but it will give you a good place to start.” That starting place, however, might be a Batson challenge that just requires the judge to be shown the JuryQuest website. But even if there were no Batson, and even if I stipulated that the demographic correlations are genuine, I’d still argue that demographics don’t provide “a good place to start” for a couple of reasons.  

1. Demographics create a false sense of specific knowledge. As I say, even where a correlation is real, it is likely to be minor. That means that close to half the time, what is true in the aggregate won’t be true in the individual. But when you start with a demographic conclusion, you might feel like you have knowledge about the individual’s attitudes and values that bear on your case, but in reality, you don’t.  

2. Demographic reliance may crowd out better sources of information. We know that humans are prone to selective perception and tend to notice what confirms our expectations more than what refutes them. In that way, a demographic expectation can be a self-fulfilling prophecy. Or worse, attorneys used to thinking that demographics provide the answer may make less effective use of voir dire, or may be less aggressive in pressing for expanded voir dire.  

In truth, the biggest culprit in promoting reliance on demographics is probably not software makers or consultants. It is courts that permit no substantive questionnaires, and either no or severely restricted oral voir dire — limits that induce attorneys to rely only on what they can see, and to believe that it means something. My own view is that when you don’t have good information, you are better off basing your strikes on the most traditional factors:  case and party knowledge, involvement in similar cases, and the other concerns that might emerge during the judge’s questioning without quite rising to the level of a cause challenge. When all else fails, you can make some reasonable judgments based on occupation, because that at least ties in to a potential juror’s daily life experience. Occupation is one of the seven factors considered by JuryQuest, but so far at least, it is a factor free from Batson-related concerns.  

Yes, Attitudes Are Still Your Best Cues to Juror Bias

The bias that jurors bring into the courtroom with them usually takes the form of attitudes.  In many cases, those attitudes are fostered by important life experiences.  And in some cases, those experiences can be related to demographics. But in all cases, it is the resulting attitudes that are doing the work. For that reason, the best voir dire strategy should focus on uncovering attitudes.  

While some in the popular press, and even some scholars, have treated “Scientific Jury Selection” and “Demographic Jury Selection” as interchangeable, there are definitely systematic and scientific ways to focus on the attitudes that matter most in jury selection, as we’ve written in the past. Granted, much of the useful advice that consultants supply during jury selection will be subjective in nature and will supplement the attorney’s own subjective interpretations. But where quantitative social science techniques apply, they should apply first and foremost to the attitudes that will drive juror decision making. One example of Scientific Jury Selection that is not based on demographics can be found in our own Anti-Corporate Bias Scale, a validated measure of the attitudes that determine initial leaning in an individual versus corporation case. It is still science, just based on a much better foundation than demographics.  

Epilogue:  Don’t Hate the Technology

One of my biggest irritations with a program like JuryQuest, and the attention it has received from scholars like Stevenson and Gadwood, is that it gives a bad name to technology used in jury selection. Skepticism of the ability of “a computer” to pick your jury, for example, might unfairly tarnish many of the more modern approaches to laptop or iPad aided jury selection. To be clear, the tools that I’ve used and reviewed — Jury Box, iJuror, and Jury Duty — do not make a similar demographic calculation to tell you who is at risk. Instead, these tools serve as sophisticated versions of the old Post-it note grid in order to systematize the choice for the attorney. The best among them allow the user to apply a weight to the information learned, and to calculate a resulting score for the prospective juror, but importantly, that score is based on the users’ own sense of what matters to their own case.

____________________

Other Posts on Jury Selection:

____________________

Gadwood, James R. (2008). The Framework Comes Crumbling Down: JuryQuest in a Batson World. Boston University Law Review, 88, 291-319.

Stevenson, Dru (2012). The Function of Uncertainty Within Jury Systems. George Mason Law Review, 19(2), 513-548.


Image Source:  spotreporting, Flickr Creative Commons, 2010 U.S. Census Form


March 1, 2012

Don’t Mistake the Purpose of “Scientific Jury Selection”

By Dr. Ken Broda-Bahm: 

The word “science” conjures up all kinds of images, and many of those images don’t quite match the realities.  One context in which scientific perceptions are at a mismatch with reality is the area of jury selection.  A week ago, Joel Warner wrote an article for Slate, the online magazine, that began with the question, “Can I use science to get out of jury duty?”  Casting a skeptical glance at the notion of scientific jury selection, Warner then broadened his critique to the jury consulting profession as a whole:  “Since even the practitioners of scientific jury selection are reluctant to emphasize the science of what they do, some folks think it is time to get rid of the business altogether.”  Being one of those folks, Warner then suggested eliminating the peremptory challenge as a way to reduce the incentive for dealing with jury selection experts.  

The suspicion illustrated in the Slate piece, and amplified in its comments, is that our legal system has been hijacked by a dubious form of science.  The article, however, is founded on a number of significant misconceptions about both the purposes and the methods that are applied when a consultant is involved in jury selection.  Because some attorneys, particularly those who have never used a consultant, might have the same misconceptions, I wanted to take a closer look at exactly what a communication or psychology expert does in court, and what we mean and don’t mean by “scientific jury selection.”  

I typically avoid the phrase “scientific jury selection,” not for Warner’s attributed reason of being reluctant to emphasize the science of what I do, but because I know that the phrase is often subject to caricature and misunderstanding.  In practice, the activities of someone in my line of work differ dramatically from the Grisham-esque Hollywood image referenced in Slate’s title and boil down to the more prosaic activities of profile, analysis, and recommendations.  The profile is a carefully constructed list of attributes — some demographics and experiences, but mostly attitudes – that general research and case-specific mock trials and focus groups tell us are likely to identify a potential juror who is a higher risk for our side.  The analysis is a careful tracking and weighting of everything we learn from potential jurors:  the attitudes and other information that they share in surveys and oral questioning.  The recommendations then apply that information to an attorney’s decision to challenge a potential juror for cause or to exercise a strike.  

I’ve written in greater detail about these activities in prior posts, but the important reminder is that none of these are radical departures from the traditional trial process, but are instead just ways of helping the attorney do what the attorney is supposed to be doing already — namely targeting and eliminating bias so their client gets a fair shake from the jury.  

To be more specific, there are a number of misconceptions in the Slate essay. 

Misperception One:  Scientific Jury Selection is Hard Science.   When average people think of science, they may think of test tubes, precise measures, and solid maxims and proof.  For example, when CSI matches a blood sample, then it is a match!  The Slate article seems to be relying on those perceptions in making comments like “the jury box turns out to be a lousy laboratory for the study of human behavior.”  However, the popular image, as well as the “laboratory” language is drawn from the physical sciences, or hard sciences.  Jury selection, on the other hand, swings from another branch of the scientific tree.  

Reality:  Scientific Jury Selection is Social Science.  The techniques applied are still “scientific” in the sense that they are methodical and replicable, but prone to the subjectivities of human interpretation and judgment.  That is not a limitation or a reason to see social science as “soft,” but is instead a realistic concession to the fact that we are dealing with individual attitudes and group dynamics.  For example, Mr. Warner argues that “to truly understand how group dynamics play out leading up to a verdict, researchers would need access to jury deliberations, and that’s strictly off-limits in real trials.”  Yes, but that is precisely the reason why mock trials and the close observation of those deliberations play such a central role in developing recommendations for trial preparation.  

Misperception Two:  Scientific Jury Selection Aims to Determine Trial Results.  The article refers to industry critic Neil Kressel in order to argue that, “The preponderance of academic researchers agree that it is extremely difficult to figure out how a jury is going to decide.”  Yes, but so do a preponderance of litigation consultants.  The implication from Warner is that since you can’t know with certainty how a jury will decide, an analytic approach to jury selection is worthless.  But that presumes that the goal of jury selection is to decide the case.  Put simply, it isn’t.  

Reality:  Scientific Jury Selection Aims to Reduce Bias.  The entire reason that the courts allow a voir dire process in the first place is to reduce bias and promote an environment where the facts win out.  As Warner notes, lawyers and consultants don’t get to “pick” juries, they get to “unpick” them by exercising challenges and strikes.  That means that there is no opportunity to “stack” juries, but instead only an opportunity to “unstack” them by eliminating those who pose the greatest risk of bringing a bias to the decision.  The article notes, “there’s something disconcerting about an expert being able to calculate how they’re going to decide a case based on their gender, background, and other characteristics,” but we aren’t calculating how they’ll decide, we are estimating the risk of bias. And it isn’t generally based on gender or background, but on expressed attitudes about issues that bear on the case.  Psychologists have known for many decades that, even in the social sciences, it is possible to measure bias in reliable ways, and I’ve written in the past on ways consultants apply this approach as well.   

Misperception Three:  Scientific Jury Selection Pollutes the Goals and Ethics of Trial.  Central to Slate’s critique, and most critiques of jury consulting, is the idea that it is somehow a foreign toxin that corrupts the pure ideals of justice.  As another critic, law professor Franklin Strier, notes, “It’s either expensive or a waste of time if it’s ineffective, or if it is effective, then it is unfair.”  The claim of unfairness assumes that jury selection assistance introduces an extralegal element into the process that skews the results.  But if what consultants are actually doing is more effectively eliminating jurors with the most evident biases, then it is hard to see where the unfairness is.  

Reality:  Scientific Jury Selection Encourages a Focus on the Legally Appropriate Factors of Bias.  Think about how jury selection occurred before the invasion of the social scientists:  Attorneys would often rely on what they could most easily see — race, gender, age, education — factors that we now know generally bear little reliable relationship to bias.  By involving someone who actually works with and measures attitudes for a living, attorneys are taking a step toward focusing more effectively on actual bias and not stereotype.  And that is exactly what the legal system is supposed to be focusing on all along.  

Of course, the ultimate irony is that all of this criticism comes in the context of an article on how to get out of jury duty.  The questions on the ethics of trial consulting are coming from someone who uses the platform of a national publication in order to coach perjury.  Mr. Warner’s advice when you’re under oath is to simply “be biased” and “say you can’t be fair and impartial,” and then hold your ground when the judge questions you.  Effective, maybe.  Honest and helpful to the process, no.  Litigation consultants, on the other hand, do tend to be honest about both the benefits and limitations of their methods, and helpful to the process as that process is designed to operate.  Warner makes a fair point when he calls trial consultants “unregulated and certification-devoid.”  Many of us have long argued for professional certification and we are getting there.  But in the meantime, the American Society of Trial Consultants (mentioned by Warner) has a professional code including standards and practice guidelines for jury selection (not mentioned by Warner) that bear on many of the problems that he and other critics see.  Conducted properly and conveyed honestly, the involvement of a social scientist improves the process and keeps it focused where the law says it ought to be focused:  the reduction of bias.  

____________________

With this post, Persuasive Litigator says goodbye to its researcher, Erik Brown, who has gone on to seek other fortunes after playing an important role in the branding and the increasing visibility of this blog.  

____________________

Other Posts on Trial Consulting Practice:

____________________

Warner, Joel (Feb. 22, 2012).  Runaway Juror:  Can I Use Science to Get Out of Jury Duty?  Slate.  URL:  http://www.slate.com/articles/health_and_science/science/2012/02/the_science_of_getting_out_of_jury_duty_.html

Image:  www.cloudninecomics.com


February 6, 2012

Help Jurors Stay Off the Bandwagon

By Dr. Ken Broda-Bahm:

Sometimes the bandwagon isn't a bad place to be.  If your case embraces what is likely to be the popular position (e.g., the big company is to blame or the accused is guilty), then a tidal wave of opinion reaching a swift conclusion, and sweeping the doubters along in its wake, might seem like a pretty good thing.  But lawyers often find themselves on the opposite side of that dynamic.  In the toughest cases, your preferred verdict will rely on jurors exercising their own independent judgment long enough to consider the unpopular view and avoid being swept up into the bandwagon.    

In that setting, does the attorney simply hope that a critical mass of jurors will have the fortitude to resist the crowd effect?  Or are there techniques that a legal persuader can apply in order to bolster arguments against the bandwagon?  A new research article, interestingly a piece focusing on jury size and the likelihood of hung juries, has something to say on the point.  Starting with the fact that smaller juries are as likely to produce hung juries as larger juries, the analysis suggests that the reason for this paradox can be found in what it calls "informational cascade," or the bandwagon dynamics of the group.  This post looks at the research and provides some advice for those times when you want to combat the cascade. 

Jury Decision Making and "Informational Cascade"

In 1970, the Supreme Court held that juries with as few as six jurors could satisfy the due process requirements of the Sixth and Fourteenth Amendments.  Part of the rationale for allowing the reduction had to do with decreasing the chances of a hung jury and a mistrial.  In the decades that have followed, however, there has been no broad-based reduction in mistrials.  Recently, two economists (Luppi & Parisi, 2012) tried to figure out why, using the tools of decision theory.  While the article is based on some complex formulas, the gist of the analysis is that smaller juries are more likely to be homogeneous, and homogeneous juries are more likely to fall victim to the informational cascade that occurs when individuals "make a decision based on the observation of others, disregarding or discounting their own private information" – or a bandwagon effect.  The bandwagon can of course lead to a unified verdict, but it can also polarize a jury, making it less likely to compromise.  Thus, they conclude that the reason smaller juries aren't more likely to come to a consensus is that the jurors aren't exercising truly independent judgment, and the greater risk of a cascade effect tends to overwhelm any advantage of a smaller group.  The key is not the numbers; it is the diversity of the group.  In a heterogeneous jury, individuals are less likely to feel peer pressure to emulate each other, and hence more likely to exercise independent judgment and come to better conclusions. 
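To make the cascade dynamic concrete, here is a minimal toy simulation of my own (not Luppi and Parisi's actual model):  each simulated juror receives a private signal that is usually correct, but votes with the visible majority once it leads by two or more votes, at which point later votes stop adding independent information.

```python
import random

def deliberate(n_jurors, p_correct=0.6, cascade_margin=2, seed=None):
    """Toy informational cascade: sequential votes where jurors abandon
    their private signal once the visible majority leads by a margin."""
    rng = random.Random(seed)
    votes = []
    for _ in range(n_jurors):
        signal = 1 if rng.random() < p_correct else 0  # 1 = "correct" view
        lead = 2 * sum(votes) - len(votes)             # majority lead so far
        if lead >= cascade_margin:
            votes.append(1)       # follow the crowd, ignoring the signal
        elif lead <= -cascade_margin:
            votes.append(0)       # a cascade can lock in the wrong view too
        else:
            votes.append(signal)  # still exercising independent judgment
    return votes

# A few early votes can lock in either outcome for everyone who follows.
print(deliberate(12, seed=7))
```

In runs like this, adding more jurors does little once a cascade starts, which tracks the authors' conclusion that group diversity, not group size, is what protects independent judgment. 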

While Luppi and Parisi rely on mathematical decision theory, a similar conclusion is borne out in observational studies.  For example, we've previously reported on a decision making study (Lorenz et al., 2011) showing that while a group's aggregate answers, when estimating a particular unknown fact, tended to be surprisingly accurate, once individuals were given access to the estimates made by others, they started to imitate each other and became less accurate, yet more confident, in their answers. 

The results of both analyses underscore the fact that even in a context like deliberation that prizes collective decision making, it is important to preserve a role for individual judgment.  There are a few ways to do that in trial preparation and in court.     

1.  In Mock Trials and Focus Groups, Always Measure Individual Before Group Response

I was recently asked by a client to review the results of a mock trial conducted by another group (yes, lawyers sometimes ask for second opinions).  It was an intense and well-organized two-day project.  But what struck me is that between attorney presentations, the mock jurors were asked just one question, "What is your current leaning?" and they circled a number without giving an explanation.  That technique may work to maximize attorney presentation time, but at the high cost of sacrificing what could be learned of jurors' individual opinions prior to group interaction and deliberations.  Having mock jurors complete a thorough questionnaire after each segment of the project is important not just for what you learn at the time, but also for the way it encourages the mock jurors to develop independent attitudes.  One feature of attitudes is that we are often not fully aware we have one until someone asks us what we think and why.  If the first time we are asked is in a group of others responding to the same question, then a bandwagon response is likely.  We've seen it in focus groups:  If you ask the same question of the group, then go around the table collecting the answers, once you've heard from half the mock jurors, the second half are generally giving you some version of "I agree with what Jim said…"  While the real trial won't have questionnaires after every presentation, that is a very desirable feature in mock trials.  You are not only collecting data, but also making your mock jurors more resistant to informational cascade.  This means that you will see more robust deliberations, which is, after all, the point of the exercise.

2.  In Trial, Voir Dire With an Eye Toward Heterogeneity of the Panel

When selecting a jury, it is typical to think of individual panelists and the way their presence would help or harm your client.  While that should be the primary focus, we also advise attorneys to give some thought to the character of the selected group as a whole.  Based on studies of group dynamics, one of the most important group characteristics to keep in mind is heterogeneity.  Will your jury be composed of individuals who are basically the same on demographic and attitudinal factors, or will there be a healthy level of diversity in the group?   "Homogeneity can be breeding grounds for unjustified extremism, even fanaticism,"  notes Cass Sunstein, a legal scholar well-acquainted with the research.  "To work well, deliberating groups should be appropriately heterogeneous and should contain a plurality of articulate people with reasonable views."  That is admittedly harder to achieve in some venues than in others.  The more common problem, though, is that attorneys are comfortable with a particular type, and not surprisingly, that is often people like them.  If what the case needs is robust deliberation to weaken a bandwagon effect, then we should exercise strikes with an eye toward the resulting diversity of the panel. 

3.  Foster Independent Choice Through Your Message

Apart from the ways to research and to select jurors, are there ways to adapt your trial message in order to promote independent judgment?  I believe that when you need to reduce the risk of a bandwagon verdict, it is always worth it to try.  Beyond the typical request to "keep an open mind," it is also effective to weave in the following messages, starting with jury selection, and continuing through openings, evidence, and closing arguments:

  • We expect this case to be difficult.
  • We expect that it will be a challenge to understand some parts of the testimony.
  • We expect that you'll have some difficulty reconciling your views once you've heard all the evidence. 
  • We ask you to take extra care to hear out each person on the jury.   

I understand that zealous advocates generally want to say that it will be easy to find for their clients.  However, in the more difficult cases, you don't want the easy decision because the easy decision will be against your client.  In addition to setting jurors' expectations for more robust and challenging deliberations, it will also help to support juror note taking if allowed in your venue, and to encourage that behavior by using a flip chart and writing on it frequently.  When you or a witness writes, the jurors will often write as well.  In that way, you are directly composing the notes that could serve as an important aid to a juror in the minority. 

So, if you are sure you have the easier side of the argument, then you probably don't care about a bandwagon effect (and probably haven't read to the end of this article).  If, however, you're aiming to reduce the chances of an easy, popular, and unfavorable decision, then you should be giving some thought to what would prevent your jury from deciding too quickly. 

____________________

Other Posts on Group Dynamics:

____________________

Luppi, B., & Parisi, F. (2012). Jury Size and the Hung-Jury Paradox. SSRN Electronic Journal. DOI: 10.2139/ssrn.1980387


Photo Credit:  Beige Alert, Flickr Creative Commons


January 23, 2012

Don’t Be Entranced By Statistical Claims From Mock Trial Research

By Dr. Ken Broda-Bahm:

At a conference, I once met another consultant who actually claimed that his mock trial findings would line up with actual trial results with a definite confidence interval of, he said, plus or minus five percent.  It was one of those conference moments when you realize that you urgently need to talk to someone else, because to anyone with even a rudimentary understanding of research methods, the claim was absurd on its face.  A mock trial will not predict your actual trial results for a number of reasons.  But beyond that, there are important limitations on the ability to apply statistical tools to the results of mock trial research. 

This is a critical point for litigators, as mock trial consumers, to understand:  While good design and research practices matter, and mock trials are able to yield many conclusions that are heuristically valuable and of great practical use, only in limited circumstances are mock trial researchers able to communicate a finding and follow it up with, “…and that is true at a point-o-five level of statistical significance.”  In this post, I provide my own thoughts on the modest role of statistical methods in the small group research context of most mock trials and focus groups, helping the litigator/consumer find the best middle ground between two extremes:  promoting an indifferent “methods don’t matter” perspective on the one hand, or promoting a narrow “what is statistical is what is true” conclusion on the other.  Both positions are equally damaging to the intelligent use of mock trial research.

Statistical Approaches Don’t Generally Fit Mock Trials:

Approaches to running and analyzing mock trials differ, and that isn’t a bad thing.  With an interest in emphasizing high standards (a goal I emphatically agree with), some researchers will stress a need to statistically justify the findings of any mock trial project (an application I emphatically disagree with).  High standards are vital, but standards need to be appropriate to the project’s design and expected utility, and levels of significance cannot be the sine qua non used to justify recommendations.

“Statistical Significance” is often interpreted in the public mind as a shorthand for that which is true, reliable, and important (von Roten, 2006).  In reality, it is simply a measure of the chances that differences observed in a sample are due to sampling error — an accident of who is picked — instead of being caused by the factor under investigation.  Even that measure rests on a number of assumptions that are unlikely to bear out in most mock trial research. 

1.  Random Sampling:  Does every member of the population have an equal chance of being chosen for the sample?  In the case of mock trials using database recruits, those who answer ads, or other volunteers, the answer is definitely “no.”  Yet even for mock trials that rely on randomized selection (our practice), there is still the bias of who is home to answer the phone, who chooses to answer, and who agrees to participate. 

2.  Ecological Validity:  Do the methods employed in research approximate the real-life situation under investigation?  Clearly, there are many differences between an actual trial and a mock trial simulation. “The uncontrolled and uncontrollable variables,” notes Doug Keene in a comment to a recent post in Deliberations, include “representativeness of presentation, environmental factors such as evidentiary rulings, judge’s whimsy about hardship issues and strikes for cause, and the talent or charisma of opposing counsel, to name a few.”  These factors prevent a clear claim of ecological validity in most attorney work-product mock trials.  Recent commentary (e.g., Wiener, Krauss & Lieberman, 2011) raises a number of questions even about the more systematic “jury simulations” that are used as the basis for academic articles in peer-reviewed publications like Law and Human Behavior.  The fact that mock trials inevitably differ from actual trials in duration, detail, and structure is an important limit on their statistical generalizability. 

3.  Adequate Sample Size:  Are there enough study participants to allow one to generalize to the population?  With the smaller groups typically used in mock trials, relationships between variables that are tested will often be insignificant, not necessarily due to an absence of a relationship, but due to the absence of an adequate sample.  The smaller the effect you’re trying to measure, the larger the sample it takes to reliably measure it.  For very large differences, you can see results in samples as small as 30, or even less.  But for the generally more nuanced differences associated with communication approaches, you would need a project with far more mock jurors in order to measure it well (the simulation sketch following this list illustrates the point). 

4.  Control Group:  When results are looked at experimentally — testing the effect of a specific variable — then there needs to be a control group that is not exposed to that variable.  When testing potential trial strategies in a mock trial, however, you are generally looking at one approach.  Applying statistical analysis to the conclusions of that would be like scoring a pharmaceutical drug trial when everyone had received the drug and no one had received the placebo.  Deciding that a specific part of the message worked or didn’t work is a judgment call made after evaluating the mock jurors’ feedback and deliberations, and not a statistically-governed finding.  There are exceptions, where for example a mock trial will build in scenarios to test (e.g., three juries see the case with the possibly precluded smoking gun memo, and three juries see the same case without it), yet in those cases, any significant conclusions would be limited to the single variable manipulated and not to all of the remaining parts of the case story. 
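As a hypothetical illustration of the sample-size limitation in item 3 above (my own sketch, not an analysis from any actual project), a quick Monte Carlo run shows how rarely a modest effect reaches significance at mock-trial sample sizes:

```python
import numpy as np
from scipy import stats

def estimated_power(n_per_group, effect_size, alpha=0.05,
                    trials=2000, seed=1):
    """Estimate the share of simulated two-group studies that detect a
    true standardized mean difference (Cohen's d) at the given alpha."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(trials):
        a = rng.normal(0.0, 1.0, n_per_group)
        b = rng.normal(effect_size, 1.0, n_per_group)
        if stats.ttest_ind(a, b).pvalue < alpha:
            hits += 1
    return hits / trials

# A modest communication effect (d = 0.3) at three sample sizes:
for n in (15, 30, 200):
    print(f"n = {n:3d} per group: power ~ {estimated_power(n, 0.3):.2f}")
```

In simulations like this one, a real but modest difference goes undetected most of the time at fifteen or thirty participants per group, so a nonsignificant mock trial result says very little about whether the difference actually exists.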

These limitations are well known and rather obvious to those who work within the litigation consulting field.  When practitioners nonetheless frame mock trial results in the language of statistics, the danger is that they are doing so for the purpose of mystification:  They are trying to convey a special or magical importance to the mock trial results.  That is the opposite of the realistic and practical advice that litigators actually need.

Still, There’s a Limited Role for Statistical Analysis of Mock Trial Results:

Even if a mock trial researcher were able to address each of these limitations, any statistical conclusions would, at most, be a statement about the population from which the jury will be drawn, and not a statement about the selected jury.  That is because juries are never selected randomly, but are instead shepherded through the intensely nonrandom process known as voir dire.  Your selected jury will, if both sides are doing their job, look quite different from your raw venue population, preventing any statistical conclusions from being applied to the seated jury itself.

But that does suggest one specific area where it can, in some circumstances, be meaningful to look at statistical results stemming from a mock trial:  jury selection.  When you have a sufficient number of mock jurors, it can be meaningful to correlate various mock juror attitudes with the leanings and verdicts observed in the mock trial, and to generate a selection strategy based on those relationships.  That approach has its limits when applied to the results of a single mock trial with twenty-five to thirty participants, but in a prior post we have outlined a multimethod approach that, while fairly expensive, does generate statistically meaningful patterns that can be applied to jury selection.  That approach, however, differs quite a bit from the conventional mock trial.  
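As a rough illustration of that correlational step, consider a Python sketch along the following lines. The attitude item, the scale, and every data point are invented:

```python
from scipy.stats import pointbiserialr

# Invented sketch: across a pooled set of mock jurors, correlate a
# pre-deliberation attitude item (1-7 agreement with a statement such as
# "Large companies will cut corners when it is profitable") with each
# juror's leaning (1 = plaintiff, 0 = defense).
attitude = [7, 6, 6, 5, 5, 4, 4, 4, 3, 3, 2, 2, 1, 6, 5, 7, 3, 2, 4, 5]
leaning  = [1, 1, 1, 1, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 1, 1, 0, 0, 0, 1]

r, p = pointbiserialr(leaning, attitude)
print(f"r = {r:.2f}, p = {p:.3f}")
# A strong, significant correlation flags the attitude as a candidate
# voir dire or questionnaire item; with only 20-30 mock jurors, though,
# any single correlation should be treated as suggestive at best.
```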

And an Unlimited Opportunity for Mock Trials to Provide Benefits at the Qualitative Level:  

Those who equate “research” with “statistics” might wonder at this point, “If there are so many limits to the statistical generalizability of mock trials, then why do them?”  The answer is simple:  Because they are useful.  Not all research-based learning stems from the relationships among numbers that we call “statistics.”  Most mock trials fall under the category of “qualitative research”:  methods that aim for a deeper understanding of human behavior and, in particular, the reasons behind it.  A well-designed mock trial allows researchers and clients to focus on the reasons that mock jurors find persuasive and use to influence each other in deliberations.  By looking at the content and patterns of mock jurors’ own attitudes and reason-giving behavior, the project generates findings and recommendations that don’t depend on any statistical stamp of approval.  Sophisticated mock trial users understand that a pattern observed in a mock trial won’t necessarily repeat in the actual trial.  But it is still useful heuristically:  generating ideas, assessing approaches, and seeing the range of possibilities. 

Still, it is important to remember that the label “qualitative” isn’t an invitation to purely subjective philosophizing.  Methods should still be systematic.  Some of the systematic yet qualitative methods employed in mock trials include:

  • Submitting open-ended questionnaire responses to content analysis to identify frequency and pattern (see the sketch following this list).
  • Looking for meaningful correspondence between attitudes and actions.
  • Creating a taxonomy of the arguments mock jurors offer for and against a party or position.
  • Noting the difference between claims that are understood and used by mock jurors, and those that are forgotten or unclear.
  • Identifying the reasons behind mock jurors’ shift of opinion.
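As promised above, here is a minimal Python sketch of the first of these methods, a simple dictionary-based content analysis. The theme labels, markers, and responses are all invented:

```python
from collections import Counter

# Code each open-ended questionnaire response against a simple theme
# dictionary, then count how often each theme appears.
themes = {
    "warning": ["warned", "warning", "notice"],
    "profit_motive": ["profit", "money", "bottom line"],
    "personal_responsibility": ["her own", "his own", "should have known"],
}

responses = [
    "The company cared more about profit than safety.",
    "She should have known the risk; the label warned her.",
    "No real warning was given until it was too late.",
]

counts = Counter()
for text in responses:
    lowered = text.lower()
    for theme, markers in themes.items():
        if any(marker in lowered for marker in markers):
            counts[theme] += 1

for theme, n in counts.most_common():
    print(f"{theme}: mentioned in {n} of {len(responses)} responses")
```

In practice, the coding scheme would be built from the mock jurors’ own language and checked by more than one coder, but even this bare version shows how frequency and pattern can be identified systematically without any claim of statistical significance.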

There are many others – all are useful, but none are foolproof.  The bottom line is that it is always a good idea to ask us researchers the grand epistemological question:  How do we know what we claim to know? 

Reasonable consultants, researchers, and clients can and do differ over exact methods.  But as consumers in this context, litigators should understand that qualitative research tools like mock trials should be valued for their ability to generate useful and meaningful results, not for their limited ability to generate statistically significant conclusions.  

________________


Wiener, R. L., Krauss, D. A., & Lieberman, J. D. (2011, June 27).  Mock jury research: Where do we go from here?  Behavioral Sciences & the Law.  DOI: 10.1002/bsl.989.  Link: http://onlinelibrary.wiley.com/doi/10.1002/bsl.989/full

Photo Credit:  Melomane, Flickr Creative Commons (with alpha symbol added by the author).  For a cool effect, move your head closer to or farther away from the image.

May 12, 2011

Don’t Count on Gender Differences When it Comes to Compassion

By: Dr. Ken Broda-Bahm –


We are often asked, “What kind of jurors do we want for our case?” and sometimes that question can veer toward demographics:  “Do we want women or men?”  In personal injury litigation, for example, the lawyers trying the case might suspect that women will show more compassion and sympathy toward an injured party, and want to keep (if they’re plaintiffs) or strike (if they’re defendants) female jurors for that reason.  Not only does the Supreme Court frown on the gender-based use of peremptory challenges (see J.E.B. v. Alabama ex rel T.B., 511 U.S. 127, 1994), but we also stress that social science is on the same side.  Demographics are only a very small slice of what a juror brings to your case.  Jurors’ case-relevant experiences and, most importantly, their attitudes will do far more to identify the high-risk jurors who are most likely to be predisposed against your case from the start.

But no sooner do we get that explanation out than along comes another study that seems to show that demographics might be predictive after all.  The latest example (Mercadillo et al., 2011) stems from a brain scanning study conducted by Mexican researchers that compared the brain activity of women and men as they looked at photographs of people who were sad or suffering.  In the women’s brains, the compassion-inducing photographs caused greater activity in several regions (thalamus, putamen, and cerebellum), indicating that the women “engaged in more elaborate brain processing” and showed “greater emotional sensitivity” when exposed to the kinds of images that evoke compassion.  So does this mean that the demographic determinists are right, and personal injury attorneys should try to tip-toe around court precedent in order to influence the gender composition of their jury?  Brain scans notwithstanding, we still say that the answer is no. Continue reading

March 31, 2011

Assess Your Juror’s Economic Security: A Vulnerable Juror Can Make for a Vulnerable Defense (Part One)

By: Dr. Ken Broda-Bahm –

The situation has been noted with surprising frequency:  Instead of filing in quietly to fulfill their civic duty, prospective jurors in voir dire have expressed a deep frustration over the litigation process and a deep concern over serving.  Most recently, an article this month in the National Law Journal noted that in some cases, prospective jurors have been on the verge of open rebellion, and many observers suspect that economic pressures have played a very significant role.  After all, for many of those who have a job and fear losing it, the prospect of being absent from work for several weeks while sitting in judgment over someone else’s fortunes doesn’t sit well.  As trial consultants who have sat in on recent trials around the country, we’ve certainly noticed a few trends:

  • A growing number of true (or at least passionate) hardship claims based on employment,
  • An increasing willingness among many judges to be lenient on work-related hardship, and
  • An irritation among some in the pool over the litigation process itself.

What these simple observations and the recent comments in the legal media don’t get to, however, is the most interesting part:  Given that juries will still be seated, what is the effect of the revised composition once those with hardships are dismissed?  Based on some recent research, as well as our own data, we believe the net result generally aids defendants.  There are two reasons for this.  One, according to psychological analyses, the economically vulnerable individuals who are most likely to win a hardship claim are also more likely to buy into the pro-plaintiff mindset referred to as an “external locus of control,” which is basically the opposite of a high “personal responsibility” point of view.  Two, our own national survey research has shown that those most likely to believe themselves economically harmed or vulnerable are the same individuals likely to harbor the greatest anti-corporate bias, which plays against defendants.  Thus, if your judge is more open to hardship claims, the result may be a more pro-defense panel. Continue reading

March 21, 2011

Put Your Jury Selection on Steroids by Leveraging Pretrial Research: Lessons from the Barry Bonds Trial

By:  Dr. Ken Broda-Bahm –

This post is focused on bulking up your ability to target high-risk jurors and performance-enhancing your voir dire.  So, speaking of steroids, let’s start with Barry Bonds.  Jury selection for the perjury trial of the former San Francisco Giants power hitter, charged with lying to a grand jury over steroid use, starts this week.  Prospective jurors will fill out a 19-page questionnaire focusing on the factors that both sides believe should help to reveal bias and guide the process of exercising cause and peremptory challenges.  But how reliable is the information underlying these questions?  A recent New York Times online article contains a curious contrast of opinions on how tightly San Franciscans will cling to their views on Bonds.  Howard Varinsky, a jury consultant famous for his work in high-profile trials like Michael Jackson’s, says “things have changed…” and a lot of people have “grown very ambivalent” on Bonds.  Another consultant, Chris St. Hilaire, says opinions are likely to have remained very strong:  “finding someone who doesn’t have an opinion about Barry Bonds is like finding a cowboy who doesn’t have an opinion about a horse.”  So who is right? Continue reading

February 21, 2011

Pay Close Attention to the Big Mouths in Voir Dire

By: Dr. Ken Broda-Bahm –


So your case is in, your jury is ready to start deliberating, and you feel pretty confident that at least a majority of your jurors favor your side of the case.  Should you feel safe?  Of course not, because the verdict isn’t in the hands of the majority as much as it is in the mouths of those with the loudest and most persistent voices.  When conducting mock trials, we see it over and over again:  The individual verdict preferences we measure before the start of deliberations often don’t match the apparent consensus that can emerge in even the earliest moments of deliberations.  Jurors will try to get a read on what the majority thinks, and many, but not all, will shift their views to align with that majority.  But according to some new research, a repeated viewpoint – even if held by only one person – can have nearly as much influence on the group consensus as a commonly held viewpoint.  In other words, the big mouth on your jury can start to seem like a majority to the rest of the jurors.  Continue reading
