Scholarship Update

Professor Yuval Feldman

Professor Yuval Feldman’s recently published book, The Law of Good People: Challenging States’ Ability to Regulate Human Behavior, is a rich and comprehensive description of the state of behavioral ethics research and its impact on public policy and decision-making.

Here is the book’s abstract:

Currently, the dominant enforcement paradigm is based on the idea that states deal with ‘bad people’ – or those pursuing their own self-interests – with laws that exact a price for misbehavior through sanctions and punishment. At the same time, by contrast, behavioral ethics posits that ‘good people’ are guided by cognitive processes and biases that enable them to bend the laws within the confines of their conscience. In this illuminating book, Yuval Feldman analyzes these paradigms and provides a broad theoretical and empirical comparison of traditional and non-traditional enforcement mechanisms to advance our understanding of how states can better deal with misdeeds committed by normative citizens blinded by cognitive biases regarding their own ethicality. By bridging the gap between new findings of behavioral ethics and traditional methods used to modify behavior, Feldman proposes a ‘law of good people’ that should be read by scholars and policymakers around the world.

For an excellent review of the book, see Richard Moorhead, Good People and the Ethics of Quiet Egocentricity, JOTWELL (September 17, 2018).

John Dean, Watergate and Loss Aversion

John Dean

In the wake of Watergate, in which so many of the scandal’s culprits were lawyers, the ABA responded by requiring law students at accredited schools to take at least one course in professional responsibility.  Then, as now, the thinking was that immersing students in discussions about the profession’s rules and values would encourage more ethical behavior, including at the highest levels of government.  Those of us who advocate for a behavioral approach to legal ethics, however, have come to believe that teaching the profession’s rules and values, though central, is insufficient.  Rather, students also need to learn how ethical decisions actually are made — that is, to learn about the situational variables, cognitive biases, and heuristics that contribute to unethical behavior.

Recently, I was pleased to learn that John Dean — the famed former White House Counsel whose riveting testimony about the Watergate cover-up during the Senate Watergate Hearings was a key part of the saga — also believes in a behavioral approach. In an illuminating deep dive into the Watergate years on The Josh Marshall Podcast, Dean reflects upon how, he now realizes, loss aversion explains much of his misconduct during his time in the Nixon White House. As Dean states (starting around 30:00 of the podcast, and as he has written elsewhere), during his active participation in the Watergate cover-up (for which he was ultimately sentenced to jail) he was experiencing a “loss frame,” which caused him to irrationally “double down” on his own misbehavior to avoid exposure and detection.  Only later, Dean notes — when he started to cooperate with prosecutors — did his loss aversion abate.

John Dean’s explanation of his own behavior has an empirical basis.  A number of studies have demonstrated the perils of loss framing. For instance, in one set of experiments, researchers found that “decision makers engaged in more unethical behavior if a decision was presented in a loss frame than if the decision was presented in a gain frame.” Other studies (e.g., here and here) have concluded that cheating occurs more frequently to avoid a loss than to secure a gain. And, as one expert on loss aversion has noted, “a host of empirical and experimental studies have shown that tax compliance is higher when, following overwithholding, taxpayers expect a refund (a gain frame), than when, following under-withholding, they expect to pay additional sums (a loss frame).” Eyal Zamir, Law, Psychology, and Morality: The Role of Loss Aversion (2015) at 32.
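
To see the asymmetry behind a loss frame in concrete numbers, here is a minimal sketch of Tversky and Kahneman’s prospect-theory value function, the standard formalization of loss aversion. The parameter values (alpha = beta = 0.88, lambda = 2.25) are the median estimates from their 1992 paper; the snippet itself is my own illustration, not drawn from any of the studies cited above.

```python
# Tversky & Kahneman's (1992) prospect-theory value function:
#   v(x) = x**alpha            for gains  (x >= 0)
#   v(x) = -lam * (-x)**beta   for losses (x < 0)
# alpha = beta = 0.88 and lam = 2.25 are their median estimates.

def value(x, alpha=0.88, beta=0.88, lam=2.25):
    """Subjective value of a monetary outcome x."""
    if x >= 0:
        return x ** alpha
    return -lam * (-x) ** beta

print(round(value(100), 1))   # 57.5   -> felt value of gaining $100
print(round(value(-100), 1))  # -129.5 -> felt value of losing $100
print(round(abs(value(-100)) / value(100), 2))  # 2.25: the loss looms over twice as large
```

On these numbers, a choice framed as avoiding a $100 loss carries more than twice the motivational weight of one framed as securing a $100 gain, which is the intuition behind Dean’s account of doubling down.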

As the new semester begins, with Watergate again in the news, many professional responsibility classes undoubtedly will be revisiting the lessons learned from the events of more than 40 years ago.  As these conversations take place, loss aversion and its role in producing unethical behavior can be — and I hope will be — an important part of the discussions.

Behavioral Science and the Duty to Report Misconduct, Pt. 2

A previous post promised updates on an interesting case, Joffe v. King & Spalding LLP, No. 1:17-cv-03392-VEC (S.D.N.Y. 2018), which addresses a common law breach of contract claim arising from the duty to report lawyer misconduct. Two weeks ago, Judge Caproni denied defendant King & Spalding’s motion for summary judgment in the case, finding that there are “questions of fact regarding whether [plaintiff] reported or attempted to report ethical concerns and whether King & Spalding retaliated against him for doing so” (Slip Op. at 22) (the decision is available behind various paywalls; reporting is available here and here). As a result, the case now moves forward to a potential trial.

One of the most interesting aspects of Judge Caproni’s decision is the legal standard it adopts for common law breach of contract under the controlling New York case, Wieder v. Skala, 80 N.Y.2d 628 (1992).  King & Spalding had argued for an “extremely narrow” Wieder test that would permit claims “only to law firm associates who are faced with plainly unethical conduct and therefore face a ‘Hobson’s choice’ between complying with their own obligation to report unethical conduct . . . and their job” (Slip Op. at 13). In other words, proof of breach of contract would require proof of a mandatory duty to report based on a clear violation of the ethical rules. Rejecting this standard, and borrowing from frameworks governing other forms of retaliatory discharge under federal law, the court concluded that “a plaintiff establishes a prima facie case under Wieder by demonstrating that he reported, attempted to report, or threatened to report suspected unethical behavior and that he suffered an adverse employment action under circumstances giving rise to an inference of retaliation.” (Slip Op. at 14). Notably, this standard does not require proof of actual misconduct; rather, the plaintiff need only possess a “sincerely held, good faith belief that there had been an ethical violation.” (Slip Op. at 15, n.11). Once the plaintiff satisfies the prima facie test, the burden shifts to the defendant to show either that the plaintiff did not act in good faith or that the adverse employment decision “was not connected to the attempted, threatened or actual report” of misconduct. The plaintiff then bears the burden of showing that the “purported non-retaliatory reasons are pretextual.” (Slip Op. at 14-15).
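
For readers who find the burden-shifting structure easier to follow as a decision procedure, here is a schematic sketch of the three steps. The function name and boolean inputs are my own illustrative shorthand, not anything drawn from the opinion.

```python
# Schematic sketch of the Wieder burden-shifting test adopted in Joffe
# (Slip Op. at 14-15). Names and inputs are illustrative shorthand only.

def claim_survives(reported_or_attempted: bool,
                   adverse_action_suggesting_retaliation: bool,
                   good_faith_belief_in_violation: bool,
                   employer_rebuts: bool,
                   reasons_shown_pretextual: bool) -> bool:
    # Step 1 (prima facie): the plaintiff reported, attempted to report,
    # or threatened to report suspected misconduct and suffered an adverse
    # employment action under circumstances suggesting retaliation. No
    # actual violation need be proved, only a sincere, good-faith belief.
    if not (reported_or_attempted
            and adverse_action_suggesting_retaliation
            and good_faith_belief_in_violation):
        return False
    # Step 2: the burden shifts to the employer to show bad faith or that
    # the adverse action was unconnected to the report.
    if not employer_rebuts:
        return True
    # Step 3: the burden returns to the plaintiff to show the proffered
    # non-retaliatory reasons are pretextual.
    return reasons_shown_pretextual
```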

Where the case goes from here remains to be seen — we will provide developments as they arise.

(7/16/18 update:  Curious development — Plaintiff Joffe’s lawyers (from Javerbaum Wurgaft Hicks Kahn Wikstrom & Sinins, P.C.) have moved to withdraw as counsel.  Much of the basis for the motion is redacted (presumably to protect client confidences), but the unredacted portions indicate a dispute over the payment of attorney’s fees and aspects of litigation strategy.  We will see what happens next.)

(9/13/18 update: On Tuesday, Judge Caproni issued an order granting the request of Joffe’s lawyers to withdraw, finding that there were “satisfactory reasons” for the motion (the order is available here).  Expert discovery will be delayed until late November to allow Joffe to secure new counsel. Whether the case moves forward or settles remains to be seen.)

(10/8/18 update: On October 2, Magistrate Judge Aaron granted a charging lien against Joffe, the amount to be determined at the end of the litigation.  Opinion here).

(10/23/18 update:  On October 16, plaintiff Joffe filed a motion to vacate the charging lien).

Behavioral Science One Sheets

EthicalSystems.org keeps adding valuable resources to its top-notch website, including to its collection of “Behavioral Science One Sheets” — short, well-written descriptions of some of the most important aspects of behavioral ethics. The most recent addition is “Motivated Reasoning,” which, as it states (and I agree), is “one of the most important topics” in explaining the behavioral science of ethical decision-making. Here’s a bit more from the One Sheet:
Motivated reasoning affects decision-making in all areas of our lives, but moral decisions are especially vulnerable. Moral decisions are often high-stakes decisions. They also tend to be especially complex, emotional, and intuitive. These characteristics provide the ideal conditions for motivated reasoning to take effect.
The One Sheets series — which now includes Bounded Ethicality, Ethical Fading, Nudging for Ethics, Speak Up Culture, Ethics Pays, Goals Gone Wild and Motivated Reasoning — can be found here.  Produced in conjunction with the Notre Dame Center for Ethical Leadership, the One Sheets are great resources for anyone wanting to get up to speed quickly on these important topics — and as handouts for an ethics class!

Scholarship Update on Stanley Milgram

S. Alexander Haslam & Stephen D. Reicher, 50 Years of ‘Obedience to Authority’: From Blind Conformity to Engaged Followership, Annual Review of Law and Social Science, Vol. 13, pp. 59-78 (2017)

Abstract

Despite being conducted half a century ago, Stanley Milgram’s studies of obedience to authority remain the most well-known, most controversial, and most important in social psychology. Yet in recent years, increased scrutiny has served to question the integrity of Milgram’s research reports, the validity of his explanation of the phenomena he reported, and the broader relevance of his research to processes of collective harm-doing. We review these debates and argue that the main problem with received understandings of Milgram’s work arises from seeing it as an exploration of obedience. Instead, we argue that it is better understood as providing insight into processes of engaged followership, in which people are prepared to harm others because they identify with their leaders’ cause and believe their actions to be virtuous. We review evidence that supports this analysis and shows that it explains the behavior not only of Milgram’s participants but also of his research assistants and of the textbook writers and teachers who continue to reproduce misleading accounts of his work.

Behavioral Legal Ethics and Accurate Science

In my article on teaching behavioral legal ethics, I noted that as teachers we have an obligation to stay atop the science in the field to make sure that we impart the most accurate and up-to-date scientific understandings to our students. This duty has become all the more important given the debate over what has been called the “replication crisis” — that is, the extensive discussion in the field of psychology (as well as other sciences) about whether the effects reported in many studies have been overstated or, in some cases, are non-existent. A number of methodological questions have been raised, including whether researchers have engaged in what is referred to as “p-hacking” — that is, manipulating data collection or analysis until a statistically significant effect emerges. This provocative topic was recently discussed in the New York Times Magazine’s cover article, When The Revolution Came for Amy Cuddy.  Even Nobel laureate Daniel Kahneman (author of Thinking, Fast and Slow) has notably weighed in, stating in an open letter that a sub-field of social psychology known as social priming has become “the poster child of doubts about the integrity of psychological research.”  More recent questions about the replicability of psychology research have also emerged.
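
For anyone who has not seen p-hacking in action, a small simulation makes the mechanics vivid. The sketch below is my own illustration, not drawn from any of the sources above: it shows how testing many outcome measures on pure noise and reporting whichever comes out “significant” inflates the false-positive rate far beyond the nominal 5%.

```python
# Simulation: testing many outcomes on null data inflates false
# positives ("p-hacking"). Illustrative only.
import math
import random
import statistics

def p_value_under_null(n=30):
    """Two-sample test on pure-noise groups; approximate two-sided p."""
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    se = math.sqrt(statistics.variance(a) / n + statistics.variance(b) / n)
    t = (statistics.mean(a) - statistics.mean(b)) / se
    # Normal approximation to the t distribution (fine for n = 30).
    return 2 * (1 - 0.5 * (1 + math.erf(abs(t) / math.sqrt(2))))

random.seed(1)
trials, k_outcomes = 2000, 10  # 10 measures tested, best one reported
hits = sum(
    any(p_value_under_null() < 0.05 for _ in range(k_outcomes))
    for _ in range(trials)
)
# One test: ~5% false positives. Ten tries: about 1 - 0.95**10, i.e. ~40%,
# even though no real effect exists anywhere in the data.
print(f"Studies with a 'significant' finding: {hits / trials:.0%}")
```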

The challenge for the legal community – at least those of us who do not rely on our own empirical research – is to ensure that we teach our students accurate science. Yet, how does one know whether previously reported studies that have been called into question should still be taught, or what provisos should be provided to students as part of our instruction?

In my professional responsibility class, for instance, in past years I have discussed (on our class blog) money priming, relying on the considerable research that demonstrates that priming people with thoughts of money can increase anti-social behavior. I usually alert my students to these studies and ask them to consider how these results might impact their ethical choices as practicing attorneys, as well as the career choices they plan to make after graduation.

In the last few years, however, there has been a debate about whether the research on money priming is as dependable as has been claimed. One set of researchers, for example, was unable to replicate some of the most well-known studies in the field, leading to questions about whether money priming even occurs. A rejoinder, based on a 10-year review of experiments, posited alternative explanations for the failures of replication, concluding that the vast majority of studies in the field still demonstrate money priming effects.
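
One statistical point is worth making explicit, because it bears on both sides of this debate: when original studies are small and only striking results get published, the published effect sizes are inflated, and faithful replications will look like failures even if a modest true effect exists. The simulation below illustrates that dynamic; the parameters (a true effect of d = 0.2, 20 subjects per group, and a publication cutoff of 0.6) are assumptions of mine, chosen only for illustration.

```python
# Sketch: low power plus publication bias inflate published effect
# sizes, so faithful replications "fail." Parameters are illustrative.
import random
import statistics

random.seed(2)
TRUE_D, N = 0.2, 20  # modest true effect; small per-group samples

def observed_effect():
    primed = [random.gauss(TRUE_D, 1) for _ in range(N)]
    control = [random.gauss(0, 1) for _ in range(N)]
    return statistics.mean(primed) - statistics.mean(control)

effects = [observed_effect() for _ in range(5000)]
published = [d for d in effects if d > 0.6]  # only striking results get printed

print(f"true effect:               {TRUE_D:.2f}")
print(f"mean across all studies:   {statistics.mean(effects):.2f}")   # ~0.20
print(f"mean of published studies: {statistics.mean(published):.2f}") # ~0.75
# A replication sized to detect the inflated published effect will
# usually come up short, even though the true effect never changed.
```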

Given these competing views about the research, what should one do? One approach would be to avoid the entire subject until the dust settles and a new consensus emerges.  I may take this approach next semester when I teach professional responsibility, as money priming is a relatively narrow topic that I teach as a small portion of my overall discussion of behavioral legal ethics. Or I may decide to engage my students in the debate, exposing them to the competing research claims and encouraging them to come to their own conclusions about how to consider the state of the science.

Either way, this example reinforces how essential it is to stay abreast of the science in the fields in which we teach. After all, behavioral legal ethics is only as stable as the science upon which it rests.

Scholarship Update: Blind Injustice

As I am sure is true for many, I have large stacks of books waiting to be read that sit in piles on my office desk or home book stand. One pile, which I am working through — books about the role of behavioral science in criminal justice — just expanded with the publication of Mark Godsey’s Blind Injustice: A Former Prosecutor Exposes the Psychology and Politics of Wrongful Convictions.  Written by the director of the Ohio Innocence Project, Professor Godsey’s book promises to be a tour de force about how a myriad of psychological factors — such as confirmation bias, cognitive dissonance and dehumanization, to name a few — can cause prosecutors to make horrible errors in judgment.  One chapter is dedicated to tunnel vision, a term well-known to those in the field of criminal justice, which too often causes prosecutors and law enforcement to focus narrowly on evidence of guilt without objectively assessing contrary evidence of innocence (I have written about tunnel vision with regard to the famous case of Jeffrey MacDonald, the Army doctor who was convicted of the brutal slaying of his wife and two young daughters more than 40 years ago).

Peppered with examples from cases Professor Godsey has worked on, as well as others in the news, the book promises to be a valuable addition to the growing body of scholarship exploring the role of psychological biases in the process of criminal adjudication.

Other notable books on the subject include Adam Benforado’s Unfair (2015), Dan Simon’s In Doubt: The Psychology of the Criminal Justice Process (2012), and Daniel Medwed’s Prosecution Complex (2012).  And, of course, there is a large and growing body of academic scholarship in law reviews, including this article by Keith Findley and Michael Scott and this one by Alafair Burke.