InsideTrack – July 19, 2017

    The Loomis Case: The Use of Proprietary Algorithms at Sentencing

    A defendant challenged the use, at sentencing, of software that measures a defendant’s risk of recidivism. The case raises interesting questions about such algorithmic tools.

    Joe Forward


    July 19, 2017 – Four years ago, a La Crosse man received a maximum sentence for attempting to flee an officer and operating a vehicle without the owner’s consent, entering a guilty plea on both charges. The sentence included a six-year prison term.

    What irked the defendant, Eric Loomis – and what he eventually argued in a petition for review to the U.S. Supreme Court – was that the sentencing judge had used a private company’s proprietary software to help determine his fate.

    What the judge had used was the same risk and needs assessment tool that many jurisdictions, in Wisconsin and beyond, are using: a software program called Correctional Offender Management Profiling for Alternative Sanctions (COMPAS).

    Developed by Northpointe Inc., COMPAS is a “web-based tool designed to assess offenders’ criminogenic needs and risk of recidivism.”1 Through offender-related data, it focuses on “predictors” that are known to affect recidivism among similar groups.

    Joe Forward, Saint Louis Univ. School of Law 2010, is a legal writer for the State Bar of Wisconsin, Madison. He can be reached by email or by phone at (608) 250-6161.

    The New York Times was following the case when the Wisconsin Supreme Court rejected Loomis’s challenge at the state level. “Sent to Prison by a Software Program’s Secret Algorithm,” the headline read. It was a good one, but not altogether accurate.

    The judge had not used the algorithm exclusively to decide whether Loomis would go to prison, as the headline suggests. But the case raises interesting questions about the use of software, especially proprietary software, to make sentencing decisions.

    High Risk

    Actuarial risk assessment instruments like COMPAS are a subset of a broader practice called evidence-based decision making (EBDM), the use of data to better inform decisions. EBDM is not unique to criminal justice – evidence-based medicine is a standard for decision-making about patients2 – and it’s not unique to Wisconsin.

    But Wisconsin is at the forefront of EBDM practice in the criminal justice system, and is currently partnering with the U.S. Department of Justice and the National Institute of Corrections on implementing EBDM tools at all stages of the criminal justice process.

    The state court system began using COMPAS after the Wisconsin Department of Corrections adopted its use statewide in 2012. The tool helps inform risk probability for pretrial release, for instance, as well as the defendant’s programming needs. The tool is also used as part of presentence investigation (PSI) reports conducted by DOC.

    The algorithm, which relies on actuarial science, generates a risk score. “Just like the insurance industry, a COMPAS score is determined by comparing your offender’s characteristics to a representative criminal population,” Northpointe has explained.3

    “The result is that a COMPAS score tells you, relative to other offenders across the United States, the predicted risk of your person.” For instance, if an offender scores a 4 on a 10-point scale, 60 percent of the sample population is more risky in that category.
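    To make the percentile logic concrete, here is a minimal sketch, in Python, of how a 10-point decile score of this kind could be computed against a normative sample. It is an illustration built on invented numbers, not Northpointe’s method: COMPAS’s actual inputs, weights, and norm populations are proprietary.

        # Hypothetical decile-style risk scoring, loosely modeled on the
        # article's description. COMPAS's actual inputs, weights, and norm
        # populations are proprietary and are not reproduced here.
        from bisect import bisect_right

        def decile_score(raw_score: float, norm_sample: list[float]) -> int:
            """Place a raw risk score on a 1-10 scale relative to a norm sample."""
            ranked = sorted(norm_sample)
            k = bisect_right(ranked, raw_score)  # how many scored at or below
            n = len(ranked)
            # Decile d covers the counts ((d-1)*n/10, d*n/10]; the integer
            # ceiling -(-10*k // n) avoids floating-point edge cases.
            return max(1, -(-10 * k // n))

        # Toy norm sample of raw scores (invented, for illustration only).
        sample = [0.12, 0.18, 0.22, 0.30, 0.35, 0.41, 0.47, 0.55, 0.63, 0.78]
        print(decile_score(0.33, sample))  # -> 4: 60% of this sample scored higher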

    In Loomis’s case, which stemmed from a drive-by shooting, the PSI report included a COMPAS assessment. It said Loomis presented a high risk of violence and a high risk to re-offend, based on certain predictors. The sentencing judge referenced the COMPAS assessment when ruling out probation and handing down the maximum sentence.

    Loomis argued, on due process grounds, that the proprietary nature of the software prevented him from challenging the scientific accuracy and validity of the factors used to return risk scores, including a possible impermissible sentencing factor: gender.

    “This Court long ago held that due process was violated when a sentence is founded, at least in part, upon significantly inaccurate information,” wrote attorney Michael Rosenberg in a motion brief to the U.S. Supreme Court, on behalf of Loomis.


    U.W. Law Professor Cecelia Klingele

    The U.S. Supreme Court recently denied review of the case, leaving the use of COMPAS intact under the state supreme court decision that Loomis had asked the Court to review.

    Making Comparisons

    As the case was pending, the issue was already on the national radar. A ProPublica piece titled “Machine Bias,” for instance, noted that courts nationwide were using sentencing algorithms and argued that they can inject bias against African-Americans.

    The article, which the Wisconsin Supreme Court noted in the Loomis decision, made an interesting point: “If computers could accurately predict which defendants were likely to commit new crimes, the criminal justice system could be fairer and more selective about who is incarcerated and for how long. The trick, of course, is to make sure the computer gets it right.” Part of Loomis’s argument was that he could not challenge the data.

    COMPAS generates a scaled risk score representing how likely offenders with a history similar to the defendant’s are to commit another crime.

    “These tools make statistical aggregate predictions about the behavior of large numbers of people who share some characteristics with the defendant,” said U.W. Law Professor Cecelia Klingele. “But it doesn’t tell you what the defendant is going to do.”

    “They tell you what a group of people who are like the defendant in some ways are likely to do as a statistical matter, what the risks or odds are that the defendant will be detected in committing a crime and being prosecuted and convicted,” she said.
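    A toy calculation makes the distinction concrete. In the hypothetical sketch below (all data invented), the “prediction” for a defendant is nothing more than the observed reconviction rate among past offenders who share the defendant’s profile – a statement about the group, not about the individual.

        # Hypothetical illustration of Klingele's point: a group-based risk
        # figure is an aggregate rate among similar past offenders. All data
        # below are invented for illustration.

        # profile -> outcomes for past offenders who share that profile
        # (1 = a later reconviction was recorded, 0 = none was).
        history = {
            ("prior_offenses>2", "age<25"): [1, 0, 1, 1, 0, 1, 0, 1, 0, 1],
            ("prior_offenses>2", "age>=25"): [0, 0, 1, 0, 1, 0, 0, 1, 0, 0],
        }

        def group_rate(profile: tuple) -> float:
            """Observed reconviction rate among past offenders sharing the profile."""
            outcomes = history[profile]
            return sum(outcomes) / len(outcomes)

        rate = group_rate(("prior_offenses>2", "age<25"))
        print(f"{rate:.0%} of similar past offenders were reconvicted")  # -> 60%
        # The 60% describes the group: 40% of the same group were never
        # reconvicted, and the figure decides nothing about this defendant.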

    Klingele recently wrote a paper, “The Promises and Perils of Evidence-Based Corrections,”4 which the Wisconsin Supreme Court cited in Loomis. She notes that such tools can help judges check unconscious bias, but there are potential misuses.

    “We know that on average, these algorithmic tools tend to make better risk predictions than people do. People tend to overestimate risk,” Klingele said. “And so these can be a nice check against those natural intuitive biases that we have.

    “But the purposes of sentencing are much broader than predicting future risk of reconviction,” she noted. “When we just look at the risk tool itself, there are a lot of potential misuses that flow primarily from misunderstanding what it does.”

    She said judges consider the seriousness of the offense, the background and personal characteristics of the defendant, the need the individual may have for intervention in order to avoid future conviction, community values, and, fundamentally, justice.

    “Sometimes a risk assessment result will be in tension with some of the other factors that judges are required to consider,” Klingele said. “We always have to remember that the job of a judge at sentencing is to consider many factors other than risk.”

    Klingele noted that the tools don’t predict what kind of crime the person may or may not commit. “There’s a big difference between a minor shoplifting and a major physical assault. We can’t predict what people will do. The future is unknowable in that way.”

    In addition, Klingele said many individuals, especially academics in the field, have concerns about any tool that is proprietary because it limits the ability to assess the driving factors behind the resulting recommendation.

    In 2014, then U.S. Attorney General Eric Holder also provided cautionary words: “Although these measures were crafted with the best of intentions, I am concerned that they may inadvertently undermine our efforts to ensure individualized and equal justice,” Holder told the National Association of Criminal Defense Lawyers.

    “By basing sentencing decisions on static factors and immutable characteristics – like the defendant’s education level, socioeconomic background, or neighborhood – they may exacerbate unwarranted and unjust disparities that are already far too common in our criminal justice system and in our society,” Holder said.

    For instance, COMPAS makes recidivism predictions through data comparisons with similar population groups, but judges (and defendants) aren’t privy to the populations used to make the prediction, as one expert pointed out at a postconviction hearing.

    “If we don’t know what’s feeding the prediction, we can’t examine whether it’s accurate, whether it’s fair, or whether it’s being utilized properly,” Klingele said.

    Wisconsin Supreme Court Decision Intact

    When Loomis took his case to the Wisconsin Supreme Court asking for a new sentence, the court unanimously ruled against him. But the justices were clear: COMPAS cannot be relied on exclusively, and courts must proceed with caution.

    “[W]e conclude that if used properly, observing the limitations and cautions set forth herein, a circuit court’s consideration of a COMPAS risk assessment at sentencing does not violate a defendant’s right to due process,” wrote Justice Ann Walsh Bradley.

    “[B]ecause the circuit court explained that its consideration of the COMPAS risk scores was supported by other independent factors, its use was not determinative in deciding whether Loomis could be supervised safely and effectively in the community.”

    Loomis argued that the judge improperly relied on the COMPAS score, and used an expert witness to argue that COMPAS should not be used at all for incarceration decisions because too little is known about how the risks are analyzed.

    “The Court does not know how the COMPAS compares that individual’s history with the population that it’s comparing them with,” testified the expert witness during a postconviction hearing. “There’s all kinds of information that the court doesn’t have. …”

    But the supreme court noted that the COMPAS assessment merely supplemented the decision of the sentencing judge, who indicated he “would have imposed the same sentence regardless of whether it considered the COMPAS risk scores.”

    And the supreme court mandated that PSI reports containing COMPAS risk assessments must make certain disclosures to sentencing courts, explaining that:

    1) “the proprietary nature of COMPAS has been invoked to prevent disclosure of information relating to how factors are weighed or how risk scores are to be determined”;
    2) “risk assessment compares defendants to a national sample, but no cross-validation study for a Wisconsin population has yet been completed”;
    3) “some studies of COMPAS risk assessment scores have raised questions about whether they disproportionately classify minority offenders as having a higher risk of recidivism”; and
    4) “risk assessment tools must be constantly monitored and re-normed for accuracy due to changing populations and subpopulations.”

    Providing this information, Justice Bradley wrote, “will enable courts to better assess the accuracy of the assessment and the appropriate weight to be given to the risk score.”5
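    The fourth disclosure, re-norming, has a concrete computational meaning: the cut points that map raw scores onto the 10-point scale are derived from a reference population, so when that population changes, the cut points must be recomputed or the scale drifts. A minimal sketch, continuing the invented decile scheme from above:

        # Minimal sketch of re-norming: recomputing decile cut points from a
        # fresh sample so that each decile again holds roughly 10% of the
        # current population. All scores are invented; this is not
        # Northpointe's method.
        import statistics

        def renorm_cutpoints(current_sample: list[float]) -> list[float]:
            """Return the nine raw-score cut points separating deciles 1-10."""
            # statistics.quantiles(n=10) yields the 10%, 20%, ..., 90% points.
            return statistics.quantiles(current_sample, n=10)

        old_population = [0.10, 0.15, 0.20, 0.25, 0.30, 0.40, 0.50, 0.60, 0.70, 0.80]
        new_population = [0.30, 0.35, 0.40, 0.45, 0.50, 0.60, 0.70, 0.80, 0.85, 0.90]

        # The same raw score lands in different deciles under the two norms:
        # 0.40 sits mid-scale in the old population but in a low decile of the
        # riskier new one, so stale norms would overstate relative risk.
        print(renorm_cutpoints(old_population))
        print(renorm_cutpoints(new_population))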

    Conclusion

    In Loomis, the sentencing judge did not rely too heavily on the COMPAS scores. And that’s an important point. What happens, in the future, if a sentencing judge places greater weight on a COMPAS score in making the sentencing decision?

    Chief Justice Patience Roggensack, in a concurring opinion in Loomis, clarified that judges can “consider” COMPAS but cannot “rely” on it in making sentencing decisions.

    But what is the line between consideration and reliance? And what happens when these tools get better? What happens, as the New York Times headline suggests – “Sent to Prison by a Software Program’s Secret Algorithm” – if some company develops an algorithm that purports to make accurate sentencing decisions in place of a judge?

    “It would be a dark future if computer algorithms ever replaced a judge’s sentencing decision,” Klingele said. And it is highly unlikely to happen, for various reasons, including the role of judges as discretionary gatekeepers.

    “I can’t imagine that a risk tool alone could produce just verdicts,” said Klingele. “The judicial function can’t be outsourced to a math problem.”

    Endnotes

    1 Practitioner's Guide to COMPAS, Northpointe Inc. (Aug. 17, 2012).

    2 Cecelia Klingele, The Promises and Perils of Evidence-Based Corrections, 91 Notre Dame L. Rev. 537, 553 (2015).

    3 Supra note 1, at 5.

    4 Klingele, supra note 2.

    5 State v. Loomis, 2016 WI 68, ¶ 66, 371 Wis. 2d 235, 264, 881 N.W.2d 749, 764.

