By Dr. Ken Broda-Bahm:
It is increasingly expected that attorneys will conduct a mock trial, particularly in high-stakes litigation. There are, however, different perspectives on what it means to conduct a mock trial, and litigators, as mock trial consumers, should be aware of these differences. On one hand, some emphasize a "do what works" perspective focused on simplicity, practicality, and lower cost. Others say, "do only what is reliable and valid," emphasizing that, especially when the stakes are high, research standards should never be sacrificed for expedience. While we here at Persuasion Strategies definitely lean in the latter direction, the discussion is not as simple as dismissing one perspective in favor of the other.
Instead, I believe there is value in both perspectives, and perhaps some conditional truth to be found between them. Imagining a conversation of sorts between these two ways of looking at and using the mock trial process, this post aims to capture that dialogue. Knowing the labels aren't perfect, the conversation will contrast "Quality" with "Practicality" when it comes to the creation of a mock trial. So if we imagine the two perspectives sharing a dais and being questioned by a moderator, the interaction might go something like this.
What is a Mock Trial For?
Quality: A mock trial is a research method. It is a way to find out with some reliability what works and what doesn't.
Practicality: To some extent, but not really. What lawyers are looking for is practical advice and help, not a scientific study.
Quality: Sure, but to be practical and helpful, the results need to have some reliability. Otherwise, it is misleading.
Practicality: But a mock trial is never fully reliable. Remember the Wiener et al. article from last year? Even the "trial simulations" used to back up peer-reviewed articles in Law & Human Behavior, and other publications, lack many of the basics of reliability.
Quality: That is true: you should never use a mock trial to predict your result, as we've written before. But if you are using it to inform your approach and your strategy, it should follow the best research practices available.
Practicality: Okay, then by that logic, as long as we are realistic about the limits, attorneys should be able to choose what meets their needs rather than being restricted to only the methods with the highest scientific value.
Should You Rely on Statistics?
Practicality: Sure, if you collect it, why not analyze it?
Quality: Yes, you should analyze what you collect, but with a couple of important caveats. First, with the exception of very large or repeated mock trials, you probably don't have the sample sizes to support claims of statistical significance. That means you can't equate "important" with "significant at the .05 level." Second, you can't let a focus on numbers obscure the story that can't be told in numbers: the qualitative experience of listening to the mock jurors.
Practicality: But as far as describing what is going on within your mock trials, the numbers tell a story as well. You can watch the numbers on leaning, for example, change from presentation to presentation, and that can lead to some practical ideas.
Quality: Sure, as long as you are describing your own sample and not generalizing to the population or to the future jury, the numbers certainly give you a useful benchmark.
How Closely Can You Replicate the Conditions You Expect At Trial?
Quality: As closely as possible. There are some things that have to be different -- like allowing attorneys to engage in summary argument instead of strictly following the rules of evidence. But in most instances, you want to test the messages you expect in trial.
Practicality: That assumes that you know what to expect in trial. At the time of mock trial testing, there is often a plethora of questions that haven't yet been resolved: admissibility, Daubert, and summary judgment on specific claims to name a few.
Quality: And in those cases, you should default to a worst-case test. It is always nice to find out that things are going to be better than you expected, but nothing guts the utility of a mock trial like discovering that damaging information you never tested is now coming in.
Practicality: That makes sense -- when in doubt, default to a reasonable worst-case scenario for your client.
Where Do You Find Mock Jurors?
Quality: This is a big one. A mock trial is only as good as its participants -- garbage in, garbage out. That means that mock trial participants should be randomly recruited, and confirmed based on quotas reflecting the makeup of your venire.
Practicality: That sounds good, but in practice, it is expensive and time-consuming to get randomly selected recruits to participate. You need to pay them more, you need to pay the recruiter more, and the whole process needs more time. That can be a luxury for many cases.
Quality: Yes, but that is what it takes. A mock trial is already a large expense, in both direct outlays and attorney time. If you are using unrepresentative jurors, you are jeopardizing the value of everything else that went into the mock trial.
Practicality: But individuals who answer ads, or who are members of a database, or even 'friends of friends' are still non-lawyers who know nothing of the case and have no dog in the fight. They still might have a useful or valuable perspective.
Quality: That is conceivable, but one thing sets the random recruit apart from all of those: They didn't volunteer. The act of putting oneself forward for a project makes one different in important ways -- research continues to show that (e.g., Hwang & Fesenmaier, 2002). Those who are called for jury duty don't independently sign up; instead, they are pulled from their lives. Random recruitment does the same thing. It is true that a volunteer might have the same reactions, but here is the problem: You can never tell which responses are representative of a jury's view and which are the idiosyncratic responses of a "professional" survey taker or a helpful volunteer.
But Is It True that 'Something Is Better Than Nothing?'
Practicality: Absolutely! For a small case, or an attorney with a small budget, it simply isn't helpful to focus on research purity if that makes a mock trial unaffordable.
Quality: Yes, it can be better than nothing, but it can also be worse. For example, if you conduct a poorly designed mock trial using unrepresentative mock jurors, that could generate some pretty wild conclusions that lead you in the wrong directions for trial. It would be better in cases like that to rely on your experience, or on consultation rather than research.
Practicality: But attorneys tend to be pretty smart folks. If they are aware of the limits and get good advice on what they should and shouldn't do with the mock trial results, then what's the harm?
Quality: It isn't impossible...but it is hard not to rely on what you've seen. Attorneys are smart, but when you watch a mock jury discussing your case, it is very easy to remember what you saw and apply it at trial, and hard to say after the fact, "well, that may not have been representative."
So What is the Bottom Line?
Quality and Practicality: The solution is to know your constraints. It is true that not every case will have the budget for mock trial research. But it is also true that research that doesn't meet some basic standards can be worse than no research at all. The bottom line that we would suggest is this: If you are going to call it "research" and treat it as research, then it should meet some minimum standards, and we would put random recruitment, for example, within that category.
But we recognize that there are other situations where research isn't really your goal, and what you want instead is a practice session or an informal sounding board. A mock voir dire is a good example. To test out your attorney-conducted oral voir dire, it is overkill to bring in random recruits. What you want is just the feel of asking the questions and a basic sense of whether they are understandable and lead to a useful dialogue. You don't need to predict how the actual panel will answer; you'll find that out when you ask the actual panel. The same approach of using what academics would call a "convenience sample" of volunteers instead of random recruits could, of course, be used for an opening statement or a witness examination, but in those cases we always suggest leaving out the other trappings of mock trial research (questionnaires, deliberations, etc.) and keeping it to an informal Q&A afterward. That way, you keep your purpose clear and reduce the risk of misuse.
So the recommendation is this: Know your purpose. If you plan to examine the results and use that analysis to inform your trial strategy, then it is research and should be treated like research. If on the other hand you want an informal shakedown cruise, then that can be a reason to relax your standards as long as it is also a reason to tighten your communication of your purpose.
Other Posts on Mock Trying Your Case:
- Don't Be Entranced By Statistical Claims From Mock Trial Research
- Create the Conditions for a Creative Trial Strategy
- Be More Realistic Than Your Opponent
Hwang, Y. H., & Fesenmaier, D. R. (2002). Self-Selection Biases in the Internet Survey: A Case Study of a Tourism Conversion Study. Proceedings of the Annual Conference, Travel and Tourism Research Association, Arlington.
Wiener, R. L., Kraus, D. A., & Lieberman, J. D. (2011, June 27). Mock Jury Research: Where Do We Go from Here? Behavioral Sciences & the Law. DOI: 10.1002/bsl.989. Link: http://onlinelibrary.wiley.com/doi/10.1002/bsl.989/full
Image Credit: bjornmeansbear, Flickr Creative Commons