Category Archives: Uncategorized

Repeated Misconduct and “Unethical Amnesia”

Those of us who study and write about ethics often wonder why human beings repeat unethical behaviors and fail to learn from their mistakes. Recently, researchers from Northwestern and Harvard have grappled with that question and believe that they may have an answer. Maryam Kouchaki and Francesca Gino conducted a study on cheating and found evidence that people suffer from "Unethical Amnesia," the tendency to forget past unethical behavior.  Kouchaki and Gino hypothesize that the psychological discomfort individuals experience when they cheat leads them to obfuscate memories of their ethically questionable actions. As a result, these individuals fail to learn from the past and are more likely to repeat bad acts.

More information about the research can be found here: Unethical Amnesia


New ABA Video — “Hidden Injustice: Toward a Better Defense”


As I’ve noted previously, research on implicit bias has taken hold at the highest levels of government, with the U.S. Department of Justice requiring training on implicit bias for all of its employees.

Criminal defense lawyers, of course, are also prone to implicit bias, as Professor L. Song Richardson has written in her excellent article in the Yale Law Journal. Now she and other experts discuss implicit bias and criminal defense in a new video, produced by the ABA, which is available here. It is an excellent introduction to the subject, and can be quite useful in classroom discussions (I plan to use it in my criminal defense ethics class this semester).

(The research basis for implicit bias also corresponds with the reasons why lawyers for indigent defendants can suffer from what I call “ethical blindness,” as I have written elsewhere).

Happy viewing!

BLE and the Practicing Lawyer

Having recently returned from International Legal Ethics Conference VII, I was happy to see so much interest in the emerging field of Behavioral Legal Ethics (BLE).  The two BLE panels on which I participated were well attended.  Other panels also included discussions of BLE, including a fascinating discussion of how behavioral science is making its way into the education of South African lawyers.

I am also heartened to see that the field is expanding to include important discussions among legal practitioners.  For instance, Catherine O’Grady and I have produced this online CLE program with the Practising Law Institute that has been viewed by more than 800 lawyers (registration fee required).  As another example, I just came across this article in a recent edition of the Oregon State Bar Bulletin (the magazine for Oregon’s practicing lawyers) that lays out some of the fundamentals of BLE (citing many leading BLE scholars such as Jean Sternlight & Jennifer Robbennolt, Robert Prentice and Catherine O’Grady).  How great that BLE has started to take hold with those who need it most — lawyers who regularly struggle with the ethical dilemmas that arise in practice.

For anyone interested in BLE, it is an exciting time indeed!

The Supreme Court’s Intuition

I’ve noticed over the years that, at least with regard to judicial disqualification, the Supreme Court has a penchant for making interesting assertions about human psychology, but then failing to provide an empirical basis for its claims — a matter I discuss in more detail with regard to the Court’s recent decision in Williams v. Pennsylvania in a new blog post on the New England Law faculty website, On Remand.

Update: 06/23/16:  Others have written more extensively about the role of unconscious bias with regard to judicial recusal and disqualification.  For some of the scholarship in this area, see:

Debra Lyn Bassett, Three Reasons Why the Challenged Judge Should Not Rule on a Judicial Recusal Motion, 18 N.Y.U. J. Legis. & Pub. Pol’y 659 (2015)

Melinda A. Marbes, Reshaping Recusal Procedures: Eliminating Decisionmaker Bias and Promoting Public Confidence, 49 Val. U. L. Rev. 807 (2015)

Melinda A. Marbes, Refocusing Recusals: How the Bias Blind Spot Affects Disqualification Disputes and Should Reshape Recusal Reform, 32 St. Louis U. Pub. L. Rev. 235 (2013)

Debra Lyn Bassett & Rex R. Perschbacher, The Elusive Goal of Impartiality, 97 Iowa L. Rev. 181 (2011)

Ethics By Design


Many of the leading researchers and scholars in the area of behavioral ethics and systems design gathered last week at a conference held by EthicalSystems.org.  Entitled “Ethics By Design,” the conference focused on business ethics, but much of what was discussed has direct applicability to the world of Behavioral Legal Ethics. Luckily for those of us who were not in attendance, videos of the conference presentations are now available here.  Thanks to EthicalSystems.org for its leadership and work in the field!

Update: 6/17/16:  For those not familiar with the work of EthicalSystems.org or its approach, the conference introduction by the organization’s founder, Jonathan Haidt, is a great place to start.

International Legal Ethics Conference VII (ILEC)


I’m happy to report that we will have two panel discussions dedicated to Behavioral Legal Ethics at the upcoming ILEC VII Conference at Fordham Law School in New York City, July 14-16, 2016.  For those who may not know, ILEC takes place every two years and is attended by legal ethics scholars, practitioners and researchers from across the globe. This year’s conference, entitled The Ethics & Regulation of Lawyers Worldwide: Comparative and Interdisciplinary Perspectives, promises to be an exciting mix of discussions on a wide range of topics.

The first panel, “Recent Developments and Future Directions in the Study of Behavioral Legal Ethics,” will feature Professor and Associate Dean Catherine Gage O’Grady, University of Arizona James E. Rogers College of Law; Professor Jane Moriarty, Duquesne University School of Law; and Professor (and BLE co-founder) Molly Wilson, PhD, St. Louis University School of Law. Professor and Associate Dean Alice Woolley from the University of Calgary School of Law will moderate.  Here is the description:

This panel will explore recent developments and future directions in the study of behavioral legal ethics and its application to the professional practice of law. Panelists will discuss how unconscious biases, self-deception, and situational dynamics impact lawyers’ ethical decision making, behavior, and professional identity. In addition, the panel will explore the unique ways that behavioral legal ethics principles impact new lawyers, examine the way ethical cultures and infrastructures differ between different legal practice settings and different jurisdictions, and suggest how insights from behavioral science can be harnessed to promote ethical professionalism in a wide variety of practice settings.

The second panel, “Experiential Approaches to Teaching Behavioral Legal Ethics,” will feature Professor Vivien Holmes, Australian National University College of Law; Professor (and BLE co-founder) James Milles, University at Buffalo School of Law; and me. Professor Julian Webb from University of Melbourne Law School will moderate. Here is the description:

This panel will focus on experiential approaches to teaching behavioral legal ethics. These include simulation exercises that place students in roles where situational influences that affect behavior can be explored, and the use of multimedia and online resources to engage students in the foundations of behavioral science. In keeping with this approach, the panel will not only report on the latest empirical and educational research, but will involve audience members in interactive exercises so as to illuminate core findings of behavioral research.

An election detour . . .

In this heated election season, let’s all remember the power of confirmation bias.  Thank you Calvin and Hobbes:


Batson: Pretext? Or Implicit Bias?

Most of us probably recall Batson, the case in which the Supreme Court held that criminal defendants could challenge their convictions if they could convince a judge that jurors of a protected class were intentionally excluded because of their membership in that class. Recently, the Missouri Court of Appeals for the Eastern District published an opinion in Missouri v. Rashad in which the issue was whether the prosecutor’s dismissal of two African-American jurors was pretextual.  The case is interesting because the prosecutor admitted that, in the case of one African-American juror, he had made a mistake in not dismissing a similarly situated Caucasian juror.  The question for the court was whether a prosecutor’s oversight excuses differential treatment.  The court answered that it does, seemingly because Batson requires intentional exclusion.

What is truly notable about this opinion is the concurrence, written by Chief Judge Lisa Van Amburg, which takes issue with Batson because it does not address implicit bias. In its simplest form, implicit bias is the unconscious tendency most people have to favor particular groups of people and disfavor others.  The Implicit Association Test (IAT) is a test in which people are asked to pair items, and their reaction time is measured.  There are a number of different versions of this test, and each measures a different type of bias.  In one, people pair black and white faces with positive or negative words.  Most people are faster when they are pairing white faces with positive words than when they are pairing black faces with positive words.  Judge Van Amburg’s point in her concurrence is that when an attorney makes an “honest mistake” by dismissing a black juror, but not a similarly situated white juror, he may well be exhibiting implicit bias.  Moreover, it is likely that this type of implicit bias occurs more broadly in the selection of jurors, and in a variety of other areas in the criminal justice system.
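The reaction-time comparison at the heart of the IAT can be sketched in a few lines of code. This is a rough illustration only: the actual IAT uses a more elaborate scoring algorithm with error penalties and trial filtering, and the reaction times below are invented for demonstration.

```python
from statistics import mean, stdev

def iat_d_score(congruent_ms, incongruent_ms):
    """Simplified D-score: the mean latency difference between the two
    pairing conditions, divided by the pooled standard deviation of all
    trials. A larger positive score means faster responses on the
    'congruent' pairings."""
    diff = mean(incongruent_ms) - mean(congruent_ms)
    pooled_sd = stdev(congruent_ms + incongruent_ms)
    return diff / pooled_sd

# Hypothetical reaction times (milliseconds) for one participant
congruent = [620, 650, 600, 640, 610]      # e.g., white face + positive word
incongruent = [780, 820, 760, 800, 790]    # e.g., black face + positive word

score = iat_d_score(congruent, incongruent)
```

With these invented numbers, the participant responds roughly 170 ms faster on the congruent pairings, yielding a large positive score; a participant with no latency difference would score near zero.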

The opinion and concurrence can be read here: Missouri v. Rashad

Lawyers Behaving Badly

I have watched the 60 Minutes story, Anonymous, Inc. (link here), only once and have just skimmed the report by Global Witness, so let me start by saying that these are very preliminary assessments – but at first blush, what I have seen is quite disturbing: a dozen lawyers from different firms, when presented with the opportunity for a significant fee, provided preliminary advice on how to help a potential client “scrub” dirty money by explaining how to structure transactions to hide the source of the funds (in contrast, a thirteenth lawyer who was approached refused to provide any advice or assistance to the potential client). The story raises a number of ethical questions: do the prescriptions of Model Rule 1.2(d) apply to prospective clients? To what extent do the lawyers in the video “know” that the prospective client has obtained the funds through crime or fraud? What obligation exists to perform due diligence to determine whether the funds are the result of crime or fraud? (Two prominent ethics experts, who provided an opinion about the conduct of the lawyers involved in the story, have expanded on their views here.)

There are also some very interesting behavioral questions: why would these lawyers, who presumably are aware of the ethical prohibitions against assisting fraud and criminal conduct, seemingly skate toward (or over) the edge of permissible conduct so easily? Merely out of greed and avarice or because of powerful behavioral factors, such as partisan bias, where advancing a (potential) client’s interest trumps the formal rules prohibiting such conduct? Is cognitive dissonance at play: once the lawyer starts to provide advice on how to structure a transaction to protect anonymity is there a need to rationalize the behavior as consistent with the ethical rules? Is motivated reasoning afoot – allowing the lawyers to convince themselves that the rules are ambiguous enough that the advice (and the potential hefty fee) is permissible? How about moral disengagement — after all, the misbehavior that produced the dirty money happened far away, across the globe, with no identifiable (or at least immediately salient) victims? Per this last point, is this an example of ethical fading, where the business aspects of the decision (how to provide the potential client with technical advice on how to hide the source of the funds) crowded out the ethical considerations involved? These are just some of the questions that jumped out at me as I watched this disturbing video.

I will spend more time thinking about these issues, posting more as I delve deeper.

(Disclosure: The CEO of Global Witness, which led this undercover investigation, and I worked together for many years and we are friends).

The Role of Statistical Risk Prediction in Criminal Sentencing

This posting is a follow-up to my earlier post, A New Era in Criminal Sentencing and Incarceration, which described the growing unease over the high rates of incarceration in the U.S.  Prison overcrowding and the disparate impact of incarceration on certain populations and communities are topics with important ethical implications.  Mandatory minimums and “three strikes” sentencing schemes have been controversial since their inception.  In the face of pressure to address the large numbers of prisoners and mounting concern over the psychological, social, and financial consequences of long-term incarceration on individuals and communities, legislatures seem poised to act.   However, replacing current sentencing schemes is no easy task; both individualized and standardized approaches are fraught with difficulties.

For more than a decade—since before the time when I was his student at UVA Law—John Monahan has been developing risk assessment models to predict the future behavior of offenders.  This work, supported by the National Institutes of Health and MacArthur Foundation, has resulted in many articles (and a book).  Monahan’s latest article, with frequent coauthor Jennifer Skeem, is titled Risk Assessment in Criminal Sentencing.  It can be found here:  Risk Assessment in Criminal Sentencing

The use of demographic and other information, along with statistics, to predict future behavior is quite different from the more traditional “gut intuition” approach.  Monahan has noted that the two methods were described in Paul Meehl’s 1954 book, Clinical Versus Statistical Prediction:

[There are] two ways of forecasting behavior. One, a formal method, uses an equation, a formula, a graph, or an actuarial table to arrive at a probability, or expected value, of some outcome; the other method relies on an informal, “in the head,” impressionistic, subjective conclusion, reached… by a human clinical judge.

[From John Monahan, Violence Risk Assessment: Scientific Validity and Evidentiary Admissibility, 57 Wash. & Lee L. Rev. 901 (2000): Violence Risk Assessment]
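Meehl’s “formal method” can be illustrated with a toy actuarial model. To be clear, the factors, weights, and intercept below are entirely hypothetical and not drawn from any real risk instrument (real tools are statistically validated on large offender datasets); the sketch only shows how a weighted formula maps an offender’s characteristics to a probability.

```python
import math

# Hypothetical, illustrative weights -- NOT from any actual risk instrument
WEIGHTS = {
    "prior_convictions": 0.35,     # each prior conviction raises the score
    "age_at_first_offense": -0.04, # later first offense lowers the score
    "employed": -0.60,             # employment lowers the score
}
INTERCEPT = -1.2

def reoffense_probability(offender):
    """Logistic model: a weighted sum of factors, mapped to a 0-1 probability."""
    z = INTERCEPT + sum(WEIGHTS[k] * offender[k] for k in WEIGHTS)
    return 1 / (1 + math.exp(-z))

p = reoffense_probability(
    {"prior_convictions": 3, "age_at_first_offense": 19, "employed": 0}
)
```

The same inputs always produce the same probability, which is precisely the contrast with the “in the head” clinical judgment Meehl describes: the formal method is transparent and reproducible, whatever one thinks of the factors chosen.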

The Federal Sentencing Guidelines (FSG) can be viewed as an attempt to strike a balance between the human decision-maker and the actuarial design. The guidelines are equitable in that they establish consistency across jurisdictions among like offenders, but they fail to take some of the more individualistic information (such as marital status and education level) into account.  Although the FSG address several problems with the judicial discretion model, critics have pointed out that the guidelines achieve uniformity at the expense of fairness, in that factors indicating the appropriateness of a lighter sentence are ignored.

Consideration of factors that have been identified and demonstrated to be reliable predictors of future criminal behavior may strike the perfect balance.  On the one hand, this approach respects the individuality of each offender, because it takes into consideration that offender’s characteristics and background.  On the other hand, statistics-based risk assessment decisions would, at least in some senses, treat similarly situated defendants similarly.

Of course, predictions of future violence have been a central part of the civil commitment process for years.  Mental health professionals have long been tasked with examining individuals and making predictions about future violence.  The advantages of a purely data-driven approach are increased accuracy and fewer cognitive errors.  In the criminal sentencing context, an actuarial approach lessens the impact of the personal preferences and biases of judge and jury.  Ultimately, the argument goes, risk prediction models are likely to result in better accuracy, and therefore more “correct” outcomes.

Although consideration of some factors related to the risk of reoffending is common in sentencing schemes, primary or wholesale reliance on statistical prediction of future risk has not been.  That may be changing.  Courts in a number of jurisdictions appear to be incorporating risk prediction into sentencing decisions.  At least one state is formally considering the move. Several months ago, the news outlet FiveThirtyEight reported that Pennsylvania would be the first state to experiment with a new sentencing practice based upon risk assessment.  According to the report:

Pennsylvania is about to take a step most states have until now resisted for adult defendants: using risk assessment in sentencing itself. A state commission is putting the finishing touches on a plan that, if implemented as expected, could allow some offenders considered low risk to get shorter prison sentences than they would otherwise or avoid incarceration entirely. Those deemed high risk could spend more time behind bars. [For more, see: Prison Reform Risk Assessment]

Risk assessment tools could help Pennsylvania effectively distinguish between the offenders who are most likely to pose a risk to society and those who would benefit from rehabilitation, substance abuse or other treatment, or a simple second chance.  The new method may allow a state to address prison overcrowding in an effective and humane way, and it could result in less crime down the road.  However, a bevy of ethical issues arise as well.  As FiveThirtyEight notes,

The risk assessment trend is controversial. Critics have raised numerous questions: Is it fair to make decisions in an individual case based on what similar offenders have done in the past? Is it acceptable to use characteristics that might be associated with race or socioeconomic status, such as the criminal record of a person’s parents? And even if states can resolve such philosophical questions, there are also practical ones: What to do about unreliable data? Which of the many available tools — some of them licensed by for-profit companies — should policymakers choose?  Prison Reform Risk Assessment

As states and the federal government address the thorny issue of how to eliminate prison overcrowding and lower incarceration rates, actuarial data derived from social science will likely play a major role.  Time will tell whether these new risk assessment tools can be implemented in a way that is ethical and effective.