Tag Archives: mock trial

February 25, 2013

Don’t Ride with ‘Frequent Flyers’ in Your Mock Trial Research

By Dr. Ken Broda-Bahm: 

So, you’re conducting a mock trial and eyeballing the participants as they check in. Gradually, it strikes you: Some seem to know the drill a little too well. Later in the day, during deliberations and interviews, it is all the more clear: Some have the wide-eyed and uncertain look of actual jurors, while others appear to be veterans, familiar with the research norms and the facilitator’s expectations. The latter group may be what we call the “frequent flyers,” those who have participated in many focus group projects and see such participation as an important source of income. They’re also a group that, for many reasons, you want to avoid. Since the entire point of conducting small group legal research like mock trials or focus groups is to hear from individuals who are substantially similar to your potential jurors, relying on frequent flyers skews your results and reduces the utility of the exercise. Instead of hearing from those who are like the true cross section of jurors, you are hearing from those who are more like professional survey takers.

Lawyers, especially those trying to cut costs in pretrial research, might be prone to believe that feedback is feedback and the reactions of any available warm bodies ought to be enough. But that could not be further from the truth. The choice to engage in jury research is already a kind of compromise, to the extent that you are probably not bringing in the kind of numbers that Gallup would rely on. To jeopardize the representativeness even more by using ringers as participants is definitely a bad idea. In this post, I lay out the case for avoiding “frequent flyers” and other opt-in participants and, instead, basing your research on those who are recruited using the most systematic and randomized techniques available.

A Big Problem (That May Get Bigger)

There is no question that the urge to use canned research participants is driven by economic considerations since randomized recruiting takes more time and is somewhat more expensive. Certainly it makes sense to look for ways to trim the mock trial and focus group research costs so it is more accessible across a broader spectrum of cases and clients. Messing with the participants, however, cuts at the heart of what makes the research useful. Nevertheless, the practice of using respondents drawn from a survey company’s database of volunteers seems to be very common. While most responsible researchers will still try to find ways of screening out the frequent flyers, the practice dramatically increases your chances of having mock jurors who are fundamentally dissimilar to your eventual jurors. 

As research increasingly moves online, the problem can be expected to grow. In a recent article in The Jury Expert, Brian Edelman wrote about the special challenges the online format poses to reliability. “Most online surveys use a nonprobability sampling technique based on ‘opt in’ panels,” Edelman writes, and “when the individuals who join such panels are different in important ways from those who do not, samples are not representative of the jury pool.” Based on a task force report produced by the American Association for Public Opinion Research (AAPOR, 2010), there are some key differences that are troubling from a research perspective. Edelman reports, for example, that many online survey vendors, driven by increasing demand for survey takers, are recruiting in less traditional ways, incentivizing participants not with plain old cash but with online gaming credits. A research participant who is there just to build a stock of imaginary animals in FarmVille is likely to differ from the typical juror in some important ways. And because they’re gamers, the idea of gaming the system is not out of the question. Edelman quotes one message posted on an online message board for “Survey Crack Heads,” or hard-core survey takers:

“Yo guys listen up!!! … ALLWAYS MARK YES IN THE FIRST QUESTIONS cuz if not, you will not be able to qualify for the survey” [sic].

That may or may not be an extreme example, but there are some important differences between frequent research participants and the general population.

What Sets Frequent Flyers Apart? 

If using database participants and other volunteers were just a matter of getting the same type of people more quickly and easily, then it wouldn’t be a problem. But it is a problem, because these participants differ in a number of ways.

Demographically and Psychologically Distinct

As you might expect, those who step forward and say, “I’d like to be hired for research studies” are not a demographic cross section of the population. The AAPOR report mentioned earlier (AAPOR, 2010) summarizes, “A large number of studies have compared results from surveys using nonprobability panels with those using more traditional methods, most often telephone. These studies almost always find major differences.” The report goes on to say that the differences may be due to the way the surveys are administered (computer versus phone) or to differences in the samples. There is a need for recent, mock-jury-specific research in this area, but I did find two studies suggesting the sample may be the dominant part of the problem. Hwang and Fesenmaier (2002) looked at differences between those who were willing or unwilling to provide online contact information – often a precondition to becoming part of a recruiting or online survey database – and found dramatic differences in demographics, behavior, and psychological characteristics. Another study (Ganguli et al., 1997) directly compared random recruits to volunteers for a community health study and found educational, cognitive, gender, and health behavior differences. So the intuition that those who step forward differ from those who are picked does have some research behind it. A recruiter could respond to these differences by using demographic quotas to ensure the recruited sample matches the population, but that won’t catch the behavioral and psychological differences that also set the volunteers apart. And, more practically, there are other problems.
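To see why demographic quotas don’t cure self-selection, here is a minimal, hypothetical simulation (the “enthusiasm” trait and all numbers are invented for illustration): a panel recruited from volunteers can be quota-matched to the population on a demographic variable and still differ sharply on a trait correlated with the decision to volunteer.

```python
# Hypothetical simulation (all names and numbers invented): quota-matching
# an opt-in panel on demographics does not remove a trait that drives
# volunteering in the first place.
import random

random.seed(42)

# Simulated jury-eligible population: a demographic group plus a
# "survey enthusiasm" trait on a 0-1 scale.
population = [
    {"group": random.choice(["A", "B"]), "enthusiasm": random.random()}
    for _ in range(10_000)
]

def mean_enthusiasm(people):
    return sum(p["enthusiasm"] for p in people) / len(people)

# Opt-in panel: the more enthusiastic a person is, the more likely
# they are to volunteer.
panel = [p for p in population if random.random() < p["enthusiasm"]]

# Quota matching: draw from the panel so that group proportions
# match the population.
target_share_a = sum(p["group"] == "A" for p in population) / len(population)
n = 1_000
n_a = int(n * target_share_a)
quota_sample = (
    random.sample([p for p in panel if p["group"] == "A"], n_a)
    + random.sample([p for p in panel if p["group"] == "B"], n - n_a)
)

share_a = sum(p["group"] == "A" for p in quota_sample) / n
print(f"Population share of group A:   {target_share_a:.2f}")
print(f"Quota-sample share of group A: {share_a:.2f}")  # matches by design
print(f"Population mean enthusiasm:    {mean_enthusiasm(population):.2f}")
print(f"Quota-sample mean enthusiasm:  {mean_enthusiasm(quota_sample):.2f}")  # still higher
```

Matching on what you can measure leaves the unmeasured differences untouched, which is the practical argument for random recruiting.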

Less Motivated (and Less Likely to Show)

One might presume the opposite: that those who volunteer or work with some regularity as research participants would be more motivated and reliable — they’re pros, after all. But recruiting experience suggests otherwise. Anecdotally, we have heard from our own recruiters that random recruits tend to have a slightly higher show rate than database respondents, perhaps because those in a database feel they will have other opportunities in the future. If the first-time flyers are not only more representative but more reliable as well, then this has a direct effect on the quality of your research.

More Savvy (and More Likely to Role Play)

Though they may be a little less likely to show, they may try harder and that presents different problems. Nonrandom recruits, Dale Hanks notes, “sometimes are also much more aggressive in trying to qualify. They have experience with the screening process and through the years may have been through dozens of screeners.” That creates a savvy that could be dangerous to your confidentiality and screening needs. “We’ve even heard them ask the agents to tell them what the correct answer is so they don’t disqualify.” Once they get past the door, they may also participate differently. Someone with prior experience may develop a sense that they know what the facilitators are looking for, and in a group discussion that kind of person could be prone to disagree just based on the perception that the researchers like it when you mix things up a bit. If they’re thinking, “How can I get this gig more often?” then a desire to please the researcher could end up infecting the research results. 

The Ineffable Difference: Self-Selection

Ultimately, though, the key problem with repeat responders and others from opt-in panels is something that may not be easy to measure or define: They’re simply different because they volunteered. That isn’t what happens in actual jury duty, where the average citizen’s routine is interrupted by a summons. They don’t choose it; it chooses them. Mock trials can’t mandate attendance the way a court can, but to me it still makes a difference that participants are being found rather than stepping forward. Even if participants turned out to be the same on every metric we could measure, that difference in self-selection would still be a disqualifier for me.

So How Do You Avoid Frequent Flyers? 

Probably the most common way consultants weed out the frequent flyers is through screening, by asking, “Have you participated in a mock trial or a legal focus group before?” Based on a question like that, many consultants will take only research “virgins,” in the sense that they’ve never participated in a mock trial or legal focus group before. That solves some problems but not others, and the savviest frequent flyers may even learn to downplay their prior experience in order to preserve their ability to participate. The best option is to follow the court’s practice and draw randomly from the jury-eligible population in your venue. As Dale Hanks argues, “Simple science shows the random group is the closest simulation which obviously provides the most accurate science in your data.” The main reason consultants and clients give for not recruiting with a random method like random digit dialing is cost, but Hanks, having routinely recruited using both random and nonrandom methods, shares the experience that the “difference in the recruiting costs rarely exceed $1000,” and such a relatively minor portion of your trial preparation budget is generally worth it in order to get the closest simulation of the actual venue. Recruiting by random digit dial can also work for online research. Even though the project takes place via the internet, participants can still be selected and screened by telephone. That helps ensure that your mock jurors actually live in your venue, and it preserves the benefit of personal contact during the critical screening stage.

The bottom line is a conclusion that applies to many of the choices you make in conducting pretrial research: Your results are only as good as your methods. Garbage in, garbage out.

____________________

AAPOR Report on Online Panels (2010). Prepared for the AAPOR Executive Council by a Task Force Operating Under the Auspices of the AAPOR Standards Committee. URL: http://www.aapor.org/AM/Template.cfm?Section=AAPOR_Committee_and_Task_Force_Reports&Template=/CM/ContentDisplay.cfm&ContentID=2223

Hwang, Y. H., & Fesenmaier, D. (2002). Self-selection biases in the Internet survey: A case study of a tourism conversion survey. Unpublished manuscript.

Photo Credit: cogdogblog, Flickr Creative Commons

January 21, 2013

Don’t Pull the Plug on the American Civil Jury Just Yet

By Dr. Ken Broda-Bahm: 


There is a body lying on the pavement. It is still twitching a bit, but fading fast. “This was no accident,” says the hard-boiled detective, “this was an attempt at premeditated murder…and it just might succeed.” If instead of “body” we’re referring to the American civil jury, and instead of “hard-boiled detective” we’re referring to a new article in the Yale Law Journal, then the scenario is roughly the same. The analysis, from Yale legal history professor John H. Langbein (2012), notes the dramatic decline in civil trials (now down to two percent of all case conclusions in federal courts and less than one percent in state courts), and ties that trend to a movement from a pleadings-based system, in which facts were resolved at trial, to a discovery-based system, in which facts are resolved not before trial but largely without trial. This, Langbein argues, is a consequence of the 1938 Federal Rules of Civil Procedure: the civil jury is fading by design, if not by intent, because the reforms have largely worked. In other words, the American jury didn’t fall, it was pushed.

While these rumors of the civil jury’s impending death may not be greatly exaggerated, they may yet be premature. This is particularly true if we focus on the role of popular judgment at a level somewhat broader than the formal jury as we have historically conceived it. By broadening our focus a bit to account for the potential jury, the expanding use of the mock jury, and potential new models such as California’s expedited jury, there is still a chance that the legal vox populi might live to play a role in the future.

The Late Great Civil Jury? 

For fans of the American jury system, as well as those who work within it, Professor Langbein’s article is a sobering read. Like many other commentators, he notes the sharp and accelerating decline in jury trials, observing that “we have gone from a world in which trials, typically jury trials, were routine, to a world in which trials have become ‘vanishingly rare.’” Unlike other commentators, however, he doesn’t link that decline primarily to the increasing costs of litigation or to the case management orientation of judges. Instead, he views the trend in more systemic terms. Noting that prior to the Federal Rules, trial was often the only way to accurately discover the facts of the case, he argues that the Rules have largely replaced “discovery by trial” with “discovery instead of trial.” While the focus is on what is called “pretrial procedure,” Langbein notes that in practice, in more than 49 cases out of 50, it really amounts to “nontrial procedure” instead. Based on a review by two Omaha attorneys (Domina & Jorde, 2010), “trial, and particularly trial by jury, is the least-used dispute resolution methodology in America.” Even as the Federal Rules have formally preserved the right to a trial, they’ve also created the conditions in which litigants find it unnecessary and often counter-productive to exercise that right. Citing Emerson’s ‘build a better mousetrap’ adage, Professor Langbein concludes: “The Federal Rules built a better mousetrap: a civil procedure centered on pretrial discovery. Litigants no longer go to trial because they no longer need to.”

Even as every legal organization imaginable has created committees and task forces aiming to save the American jury, the systemic factors that Professor Langbein documents seem destined to persist. That doesn’t, of course, mean that the civil jury will soon, or even eventually, go away. Trials will continue, but those that make the cut are likely to become more and more unusual: cases that are higher stakes and cases that carry some kind of atypical barrier that has made settlement difficult or impossible. And as the matters that go to trial become less representative of cases overall, they’re also less able to serve as examples for the preponderance of disputes that are bound for settlement.

So, that raises a practical question for the great majority of cases that settle out of court: What is the benchmark? The case will settle based on something, and hopefully it is neither an arbitrary point between demand and offer, nor the equally arbitrary point at which the parties simply reach exhaustion. Facing the decline of the conventional civil jury, a future for popular adjudication may lie in finding innovative ways to create that benchmark.

A Continuing Role for Popular Adjudication

Even for those cases that will involve no ultimate jury, there is still a role to be played by the broader notion of public judgment.

1. The Potential Jury as Benchmark

Relatively few cases involve an actual jury, but a far larger proportion still involve the role of a potential jury. This includes all cases in which one side or both are preserving their right to a jury as an option. Like a silent party to the negotiations between the plaintiffs and defendants, the perception of what a jury in the venue would do if it heard the case exerts a strong pull on strategic positioning, case assessment, and settlement offers. The diminishing supply of actual comparison verdicts coming out of the courts provides a reason for attorneys to turn to specialists, and consultants are likely to increasingly fill that role.

2. The Mock Jury as Test

Particularly when dealing with larger or more complex cases, it has become the “standard of care” for a mock trial to be conducted prior to settlement, providing an opportunity for specific assessment to serve instead of subjective judgment. Using three or more juries composed of randomly recruited citizens from the venue, a mock trial exercise provides a foundation for case risk assessment and often for a settlement offer. Frequently, when a project concludes, the mock jurors will ask, “Is it possible for you to let us know what happens when the real jury hears it?” The correct answer is always, “No, we aren’t going to contact you again,” but what I often want to say is, “In all likelihood, you were the real jury…or at least as real a jury as this case will ever see.” And, if you think about it, that isn’t necessarily a bad thing: Whether the state calls in actual jurors or we recruit mock jurors, the case still gets its day in ‘court,’ of sorts, and still benefits from the leveling influence of popular judgment.

3. The Expedited Jury as Reality

One example of the actual court system appearing to draw inspiration from the mock trial method is California’s relatively recent experiment with a simplified and shortened format designed to preserve the option of a formal jury for a class of cases. In 2010, the state legislature passed the California Expedited Jury Trials Act, creating an option that parties could enter into through mutual agreement: a one-day trial with stipulated exhibits and evidence, no appeal or post-trial motions, a jury of 8 citizens with no alternates, and a binding result subject to a high-low agreement. While the model has, up to this point, been used mostly with lower-value cases like automobile accidents, the early responses to the method have been quite positive. Users participating in a recent survey (Cheng, 2012) “were very satisfied with their experience, and lauded it for its ability to reduce time and monetary costs for their clients and themselves.” There is no reason that this model or something similar couldn’t be applied to larger cases, and also no reason that mediators shouldn’t simply adopt the approach as part of a private dispute resolution process. As we’ve suggested before, if what is preventing an early settlement is the existence of differing perceptions of what an actual jury would do, why not bring in a mock jury to serve as that additional source of information or reality check for the parties and the mediator?

Back in the intensive care ward, the patient – the American civil jury – still isn’t looking so good. The formal role played by average American citizens in resolving civil disputes, unique in the world, definitely had a good run. But it now seems to be shifting swiftly into another role, one focused on fewer and less typical cases, as well as on alternate avenues of influence. As the broader dispute resolution system adapts, it appears cautiously possible that a meaningful role for popular judgment will survive.

____________________

Cheng, Y. (2012). A Law and Economics Approach to the California Expedited Jury Trials Act. Legal Studies Honors Thesis. University of California, Berkeley. http://legalstudies.berkeley.edu/files/2012/06/Cecilia-Cheng-Sp12.pdf

Domina, D. A. & Jorde, B. E. (2010). Trial: The Real Alternative Dispute Resolution Method. Voir Dire, Fall/Winter. http://www.dominalaw.com

Langbein, J. H. (2012). The Disappearance of Civil Trial in the United States. 122 Yale Law Journal 522. http://yalelawjournal.org/the-yale-law-journal/article/the-disappearance-of-civil-trial-in-the-united-states/

Photo Credit: RembergMedialimages, Flickr Creative Commons

December 10, 2012

Beware of TMI: More Information Doesn’t Lead to Better Decisions

By Dr. Ken Broda-Bahm: 


There is a disconnect between awareness and reality when it comes to the level of information presented in complex civil litigation. Experienced litigators know that jurors can only absorb so much, and that they will decide a case based on the peaks, not the nooks and valleys, of what you give them. But, too often, that understanding is shattered in the face of voluminous exhibit and “will call” lists. In broad terms, we understand that more isn’t necessarily better, but in practical terms, knowing what to limit can be very difficult.

A classic study (Bastardi & Shafir, 1998) confirms our intuition that more information doesn’t necessarily help decision making and can even lead to worse results. The Princeton and Stanford researchers manipulated the level of information received in a loan application scenario and demonstrated a distracting effect once they complicated that scenario. Calling this “a troubling blind spot in the way we make decisions,” psychologist Ron Friedman wrote recently in the Glue blog for Psychology Today that the study “highlights the downside of having a sea of information available at our fingertips.” Yet it is precisely that sea that most litigators face in the space between the end of discovery and the beginning of trial. As you might guess, dumping it all on the judge, arbitrator, or jury isn’t the best option, given that additional data can sometimes dilute the rationality of decision making. How to practically (and responsibly) limit this information at trial is quite a broad topic. But before the trial comes the mock trial. So this post takes a look at the study and recommends some practical ways to limit the amount of information during pretrial research and pare down the complexity prior to trial.

The Study: More Information, Less Rationality

The researchers, Anthony Bastardi and Eldar Shafir (1998) tested the information overload hypothesis by comparing two groups in their review of a single loan application. Both groups heard that an otherwise qualified loan applicant hadn’t paid his credit card debt in the last three months. Group 1 heard that this was a $5,000 debt. Group 2 heard that the exact amount of the debt was uncertain, but that it was either $5,000 or $25,000. Participants then had the option to approve, deny or get additional information. Most members of the second group, understandably, decided to wait on the additional information. When they did, they were told that it turned out the applicant’s debt was $5,000, the same amount the first group heard.

But even though they ended up with the same information — a loan applicant with an unpaid $5,000 balance — they made radically different decisions. Those in Group 1 who heard just the simple story denied the loan in 71 percent of the cases. In Group 2, however, they rejected in only 21 percent of the cases. In explaining why that second group accepted the loan at three times the rate, Friedman notes the possibility of a “cliffhanger” effect where the initially uncertain or unclear information “raises a metaphorical red flag and says, ‘Pay attention. This could be important.'”
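Reading the reported figures as denial rates, a quick back-of-the-envelope check of that "three times" claim (the acceptance rates are simply the complements):

```python
# Denial rates reported for the Bastardi & Shafir (1998) loan study.
group1_denial = 0.71  # simple story: $5,000 debt
group2_denial = 0.21  # "cliffhanger" story, later resolved to the same $5,000

# Acceptance rates are the complements of the denial rates.
group1_accept = 1 - group1_denial  # 0.29
group2_accept = 1 - group2_denial  # 0.79

ratio = group2_accept / group1_accept
print(f"Group 2 accepted at {ratio:.1f}x the rate of Group 1")  # roughly three times
```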

Of course, there is another explanation for the result as well: the contrast effect. When participants heard that the applicant owed “only” $5,000, rather than the $25,000 he might have owed, it seemed like a strong point in the applicant’s favor, and it might have felt unfair to deny the application in the face of this favorable information. It might have also felt cognitively inconsistent to have delayed the decision in order to get additional information, only to deny the application after receiving information that was as good as it could have been in that scenario. But that may be the point: The additional information simply complicated the decision making by creating the opportunity for more biases to be brought into play. At a rational level, participants probably should have said, “Well, the debt is at least $5,000, so that is good enough reason for a denial.” But the additional information, and the possibility of an even greater debt, just served to take their evaluation down other paths.

Avoiding TMI in Pretrial Research

The same can happen in trial, of course. The more data decision makers have to react to, the more nuanced and complicated will be their reactions. From a purely logical standpoint, advocates will sometimes think, “If I have five great arguments for my side, then adding one more ‘good, but not great’ argument certainly can’t hurt.” But our reaction is, “Oh yes, it can hurt.” In arguments to the bench, for example, it may mean that your judge spends all her time picking apart that sixth argument and, as a result, talks herself into a more negative view of your case. The same can happen in deliberations: If the additional information just serves as a platform for a discussion that runs against you, then the net effect is to worsen your case. 

The calculation, of course, will be unique to each case. Information that is superfluous in one case might be absolutely essential in another. But one area where trial teams can, and certainly should, limit the information is during a mock trial. Because the research quite often takes place in only a day, or even less, cutting back is unavoidable. But rather than this being a limit on the usefulness of the mock trial, that act of paring things down can be one of the mock trial’s most important strengths. It is a chance to see what matters most, and what you learn in that context can spill over to trial. So here are a few ideas for cutting back on the level of detail in a mock trial.

Exhibits: Try It With 20. We’ve conducted mock trials on cases involving just a handful of exhibits, and other cases involving tens of thousands of exhibits. But in nearly all cases, we ask each party to limit themselves to the twenty most important documents. We’ll typically provide those in individual notebooks so each mock juror can give them serious attention, but when the set is limited to just those we expect the panel to seek out and argue over, that leads to more focus and avoids document overkill. 

Witnesses: Test Them in 4 Minutes. When a trial team is preparing witness testimony for use in a mock trial, they often face the daunting task of slimming down a day-long video-recorded deposition into a short snippet. How short? Our experience is “shorter than you would think.” There is some research (Zunin, 1972), for example, showing that our first impressions become solidified after only about four minutes of exposure to a stranger. In light of that, you don’t need the witness to convey all the key areas of testimony, since those can be summarized by the attorney anyway. What you need is just a small slice of relevant testimony: That will be enough for the mock jurors to form a durable credibility assessment.

Summaries: Trim Them to 45 Minutes. This is the hard part. Since your actual case will unfold over a period of days or even weeks, how do you squeeze it down to a size that can be reliably tested in a mock trial? The answer is provided in part by mock jurors’ attention spans. Listening to just one speaker, even a good speaker, can be taxing. After about 40 to 45 minutes, we’ve noted that maintaining attention gets more and more difficult for mock jurors. Aside from that limit, however, keeping it to three-quarters of an hour is also a way to keep you focused on the central parts of the story and not the extraneous details. 

Verdict Forms: Keep It to a Couple Pages. You want to be realistic in the style of verdict and the questions that you test, but at the same time, you will see more productive and less frustrating deliberations if the mock jurors are reacting to just a handful of the critical questions. So thoughtful edits — combining some parties, or shortening the path of causation, for example — can often prevent your panels from getting bogged down and make sure the discussions you see are discussions on the issues that matter most to your case.  

The information you test in the mock trial isn’t the whole trial, but it is the core. And mock trials don’t predict real trial results, but chances are that participants in both settings will be reacting more to the broad outlines of the story than to the individual details. When we are fortunate enough to have done a mock trial and an actual trial for the same case, it is uncanny how often the final interview in the mock trial and the post-verdict interview in the real trial essentially boil down to the same discussion. What that means is that somehow the actual jurors cut through all the additional information they heard in the real trial in order to get back to that same core story tested in the mock trial. So once you’ve found your core, as much as possible you should try to stick with that. 

____________________

Bastardi, A., & Shafir, E. (1998). On the pursuit and misuse of useless information. Journal of Personality and Social Psychology, 75(1), 19-32. PMID: 9686449


Image Credit: Saad Faruque, Flickr Creative Commons

May 26, 2010

Be More Realistic Than Your Opponent

By Dr. Ken Broda-Bahm:

 

If it isn’t already a saying, then I’ll try to make it one: When a case gets as far as trial, it is because one side or the other is seriously misunderstanding its chances for success. When both parties are realistic about their chances, then there is really no reason not to settle.

If it is really that simple, then why are juries still being summoned across the country for trials?  A recent psychological study explains why:  it is probably both sides that misunderstand their chances for success.  Bottom line:  attorneys are notoriously bad at predicting their own case outcomes.  Continue reading
