By Dr. Ken Broda-Bahm:
There was a time when conducting a mock trial to prepare your case was a novel idea. We are long past that now, to the point where a mock trial is often treated as a fungible product. Attorneys will, in effect, put a project out to bid on the belief that one group’s mock trial is very much like any other group’s. But the field is not yet at that point of consistency. Mock trials are not all created equal, and some fairly substantial differences in approach still distinguish one group’s mock trial from another’s. Beyond differences in quality, there are also variations in design that reflect the philosophical commitments of the researchers or are customized to the clients’ purposes.
These days, attorneys should be smart consumers and should not impose a false parity on consultants selling mock trial services. For our part, we at Persuasion Strategies believe that we’ve been very open about our commitments, and sometimes those research design discussions have found their way into this blog. Here are the top seven posts so far focusing on some critical considerations and distinctions in mock trial design.
It is increasingly expected that attorneys will conduct a mock trial, particularly in high-stakes litigation. There are, however, different perspectives on what it means to conduct a mock trial, and litigators as mock trial consumers should be aware of these differences. On the one hand, some would emphasize a “do what works” perspective focusing on simplicity, practicality, and lower cost. Others would say, “do only what is reliable and valid,” emphasizing that especially when the stakes are high, research standards should never be sacrificed for expedience. While we here at Persuasion Strategies definitely lean in the latter direction, the discussion is not as simple as dismissing one perspective in favor of the other. Instead, I believe that there is value to be found in both perspectives, and perhaps some conditional truth to be found between them. Imagining a conversation of sorts between these two ways of looking at and using the mock trial process, this post aims to capture that dialogue. (Read more).
I have learned from talking with clients that the phrase “mock trial” can refer to many different things. There is a common core — mock jurors hearing parts of a case and deliberating while you watch — but beyond that, the way that it is executed can vary quite a lot. So for this post, I thought I would share my own list of 13 “best practices.”
1. Randomly Recruit Your Mock Jurors. The quality of your results will only be as good as the quality of your participants going in. As I have written before, mock jurors who are randomly recruited will be more like your actual jurors than any pool gathered from a database or, worse, from a “Friends and Family” panel. Results from those poorly recruited projects can be worse than nothing, because they can be misleading. The best practice is still to rely on a mechanism, like Random Digit Dialing (or RDD), to ensure that every member of your target population has an equal chance of being contacted for the project. (Read more).
At a conference, I once met another consultant who actually claimed that his mock trial findings would line up with actual trial results with a definite confidence interval of, he said, plus or minus five percent. It was one of those conference moments when you realize that you urgently need to talk to someone else, because to anyone with even a rudimentary understanding of research methods, the claim was absurd on its face. A mock trial will not predict your actual trial results for a number of reasons. But beyond that, there are important limitations on the ability to apply statistical tools to the results of mock trial research. This is a critical point for litigators, as mock trial consumers, to understand: While good design and research practices matter, and mock trials are able to yield many conclusions that are heuristically valuable and of great practical use, only in limited circumstances are mock trial researchers able to communicate a finding and follow it up with, “…and that is true at a point-o-five level of statistical significance.” (Read more).
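A back-of-the-envelope calculation shows why the “plus or minus five percent” claim fails on its own terms, even setting aside the deeper validity problems. The standard margin-of-error formula for a proportion, applied to a hypothetical mock-trial panel size (the n = 36 below is my assumption, not a figure from the post):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate 95% margin of error for a proportion
    estimated from a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

# A hypothetical panel of 36 mock jurors:
print(f"+/- {margin_of_error(36):.0%}")  # roughly +/- 16 points, not 5
```

Hitting a true ±5 percent would take a simple random sample of roughly 385 respondents, far beyond any mock trial panel, and even then only for a single survey-style proportion, not a prediction of a different jury deciding a differently tried case.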
This post is focused on bulking up your ability to target high-risk jurors and performance-enhancing your voir dire. So speaking of steroids, let’s start with Barry Bonds. Jury selection for the perjury trial of the former San Francisco Giants power hitter, charged with lying to a grand jury over steroid use, starts this week. Prospective jurors will fill out a 19-page questionnaire focusing on the factors that both sides believe should help to reveal bias and guide the process of exercising cause and peremptory challenges. But how reliable is the information underlying these questions? A recent New York Times online article contains a curious contrast of opinions on the question of how tightly San Franciscans will cling to their opinions on Bonds. Howard Varinsky, a jury consultant famous for his work in high-profile trials like Michael Jackson’s, says “things have changed…” and a lot of people have “grown very ambivalent” on Bonds. (Read more).
What’s the “credibility fish,” you ask? It is the shape in the image above: the graph that is made when measuring the credibility of each of two parties over the course of a simple mock trial. We ask the mock jurors to rate each party’s credibility on a scale that ranges from 7 (highest) to 1 (lowest) at each of three phases. After they’ve heard only from the plaintiff, a few will have reservations or give the defendant the benefit of the doubt. But most should give strong credibility to the plaintiff and weaker credibility to the defense. Then, after hearing from the defense, that relationship should reverse itself. Hearing the rest of the story, most should see some problems in the plaintiff’s case and some merit in the defense. Finally, after hearing the plaintiff’s rebuttal, those ratings converge toward a midpoint. As the mock jurors head into deliberations, understanding of the two sides should even out, setting the stage for robust disagreement. Graphing those shifting ratings, what you get is the shape of a rightward-facing fish, just like the Christian car decal (but not the “Darwin” version with the feet on it). (Read more).
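The three-phase pattern described above can be made concrete with illustrative numbers. The ratings below are invented for the sketch (the post reports no specific values); they simply encode the expected shape: plaintiff strong after its case-in-chief, the relationship reversing after the defense, and the two lines converging after rebuttal.

```python
# Hypothetical mean credibility ratings (scale: 1 = lowest, 7 = highest)
phases = ["After plaintiff's case", "After defense case", "After rebuttal"]
plaintiff = [5.8, 3.9, 4.4]   # strong start, dips after defense, partly recovers
defendant = [3.1, 5.2, 4.1]   # weak start, overtakes, then converges

# Plotted over time, the crossing and convergence trace the "fish" shape
for phase, p, d in zip(phases, plaintiff, defendant):
    print(f"{phase:>24}: plaintiff {p:.1f}  defendant {d:.1f}")
```

Any real project would use the jurors’ actual mean ratings at each checkpoint; the diagnostic value is in whether the data reproduce this crossing-and-converging pattern or depart from it.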
So, you’re conducting a mock trial and eyeballing the participants as they check in. Gradually, it strikes you: Some seem to know the drill a little too well. Later in the day during deliberations and interviews, it is all the more clear: Some have the wide-eyed and uncertain look of actual jurors, while others appear to be veterans, familiar with the research norms and the facilitator’s expectations. The latter group may be what we call the “frequent flyers,” or those who have participated in many focus group projects and see such participation as an important source of income. They’re also a group that, for many reasons, you want to avoid. Because the entire point of conducting small-group legal research like mock trials or focus groups is to hear from individuals who are substantially similar to your potential jurors, relying on these repeat participants skews your results and reduces the utility of the exercise. Instead of hearing from those who are like the true cross section of jurors, you are hearing from those who are more like professional survey takers. (Read more).
There is a disconnect between awareness and reality when it comes to the level of information presented in complex civil litigation. On the one hand, experienced litigators know that jurors can only absorb so much, and will be deciding a case based on the peaks, not the valleys, of what you give them. But, too often, that understanding is shattered in the face of voluminous exhibit and “will call” lists. In broad terms, we understand that more isn’t necessarily better, but in practical terms, knowing what to limit can be very difficult. A classic study (Bastardi & Shafir, 1998) confirms our intuition that more information doesn’t necessarily help decision making and can even lead to worse results. The Princeton and Stanford researchers manipulated the level of information received in a loan application scenario, and demonstrated a distracting effect once they complicated the scenario. Calling this “a troubling blind spot in the way we make decisions,” psychologist Ron Friedman recently wrote in the Glue blog for Psychology Today that the study “highlights the downside of having a sea of information available at our fingertips.” (Read more).
Other “Best of” Posts:
- Know Your Techniques of Persuasion (Top ‘Practical’ Posts)
- Be Well Read (My Favorite Litigation Persuasion Blogs)
- Top Products Liability Posts
Image credits are included within each post.