What algorithmic risk assessment has done is reveal the inequality inherent in all prediction, forcing us to confront a much larger problem than the challenges of a new technology. Algorithms shed new light on an old problem.

– Sandra Mayson, “Bias In, Bias Out,” Yale Law Journal

Using data that comes from a biased system will likely produce output shaped by that system’s bias.

There are many examples of algorithms being trained on existing biased data and generating problematic results (Michael Li, “Addressing the Biases Plaguing Algorithms,” Harvard Business Review).

When an algorithm is trained on data pulled from a clear history of discrimination against particular populations, it will reproduce the same patterns of discrimination. For example, Amazon famously and inadvertently discriminated against women when it developed an algorithm to screen resumes in a field that has historically been dominated by men: the algorithm learned to weed out female applicants (Ali Ingersoll, “How the Algorithms Running Your Life Are Biased,” The Washington Post).
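To make that mechanism concrete, here is a minimal, hypothetical sketch in Python using scikit-learn. Nothing in it reflects Amazon’s actual system: the features, the “gender proxy” column, and the size of the historical penalty are all invented for illustration. A classifier is trained on synthetic hiring labels that carry a built-in penalty against a gendered proxy feature, and it learns to reproduce that penalty.

```python
# Minimal, hypothetical sketch (NOT Amazon's system): a classifier trained on
# synthetic, historically biased hiring labels learns to penalize a feature
# that acts only as a proxy for gender.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Invented features: years of experience, a skills score, and a binary
# proxy for gender (e.g., a resume that signals membership in a women's group).
experience = rng.normal(5, 2, n)
skills = rng.normal(0, 1, n)
gender_proxy = rng.integers(0, 2, n)  # 1 = resume reads as "female"

# Historical labels: past hiring decisions were based on merit MINUS a penalty
# applied to the proxy group -- this is the bias baked into the training data.
merit = 0.5 * experience + 1.0 * skills
hired = (merit - 1.5 * gender_proxy + rng.normal(0, 1, n) > 2.5).astype(int)

X = np.column_stack([experience, skills, gender_proxy])
model = LogisticRegression(max_iter=1000).fit(X, hired)

# The model reproduces the historical penalty: the proxy feature receives a
# clearly negative coefficient even though it says nothing about merit.
print(dict(zip(["experience", "skills", "gender_proxy"],
               model.coef_[0].round(2))))
```

In this toy setup the model assigns a negative weight to the proxy feature even though it carries no information about job performance: the historical bias has simply been re-encoded as a “pattern.”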

Risk assessment tools (RATs), especially in the criminal legal field, make or inform decisions that have major consequences for accused or convicted individuals. Many RATs are already in use in sentencing, probation, and parole, in addition to the pretrial RATs we focus on here.

Some argue that RATs add an essential objective, evidence-based element to processes like sentencing (Anne Milgram, “Why Smart Statistics Are the Key to Fighting Crime,” TED Talk) – “objectivity” that is supposed to help correct for bias in the severity and length of sentencing decisions.

However, as one scholar notes, “The use of some types of risk assessment can elevate sentences on the basis of acuteness of disadvantage, reinforce/mask racial and gender disparity, produce false positives, and lead to less transparent decisions” (Kelly Hannah-Moffat, “The Uncertainties of Risk Assessment: Partiality, Transparency, and Just Decisions,” University of Toronto).

One study of RAT use in sentencing found that when judges used the results of a RAT to make decisions, the likelihood of incarceration decreased for wealthy defendants but increased for poorer defendants. The researchers concluded that “under some circumstances, risk assessment information can increase sentencing disparities” (Jennifer L. Skeem, Nicholas Scurich, and John Monahan, “Impact of Risk Assessment on Judges’ Fairness in Sentencing Relatively Poor Defendants,” Virginia Public Law and Legal Theory Research Paper No. 2019-02).

Predictive policing is another example: it relies on algorithms to predict where crime is supposedly most likely to happen or who is supposedly most likely to be involved (Privacy SOS, “What’s Predictive Policing,” ACLU Massachusetts).

The inputs, however, are clearly biased. These tools rely on information from police databases that are themselves biased, which can lead to “more aggressive enforcement in communities that are already heavily policed” and spark a “cycle of distorted enforcement” in those communities (David Robinson and Logan Koepke, “Stuck in a Pattern: Early Evidence on ‘Predictive Policing’ and Civil Rights,” Upturn).

For instance, Black and Latinx individuals have historically been arrested for marijuana offenses far more often than white individuals, despite similar rates of marijuana use (ACLU, “Marijuana Arrests by the Numbers”).

In Newark, New Jersey, the Justice Department found that although Black individuals make up only 54% of the city’s population, they accounted for 85% of pedestrian stops and 79% of arrests (Nicole Flatow, “At Least 3/4 of Newark Pedestrian Police Stops Had No Constitutional Basis, Justice Department Finds,” Think Progress).

When all of this arrest data is fed into a predictive policing algorithm, the result is increased surveillance and patrolling in neighborhoods of color and poor neighborhoods, which in turn produces more arrests and more criminalization. Those additional arrests make the algorithm’s prediction appear correct – the data now “shows” that these areas have more crime – which justifies still more surveillance, and the cycle repeats.
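This feedback loop can be illustrated with a deliberately simplified simulation. Everything below is hypothetical: two neighborhoods are given identical underlying offense rates, and a “predictive” rule allocates next week’s patrols in proportion to last week’s arrests.

```python
# Deliberately simplified, hypothetical simulation of a predictive policing
# feedback loop. Two neighborhoods have the SAME underlying offense rate,
# but neighborhood "A" starts out more heavily patrolled.
true_offense_rate = {"A": 0.10, "B": 0.10}   # identical ground truth
patrol_share = {"A": 0.70, "B": 0.30}        # historical over-policing of A
total_patrols = 100

for week in range(1, 6):
    # Arrests scale with how many officers are present to observe offenses,
    # so the over-patrolled neighborhood generates more arrest records.
    arrests = {
        hood: patrol_share[hood] * total_patrols * true_offense_rate[hood]
        for hood in patrol_share
    }
    # "Predictive" step: allocate next week's patrols in proportion to
    # where the arrest data says crime was found.
    total_arrests = sum(arrests.values())
    patrol_share = {hood: arrests[hood] / total_arrests for hood in arrests}
    print(f"week {week}: arrests {arrests}, patrol share {patrol_share}")
```

Because arrests can only be made where officers are present, the initial 70/30 patrol split reproduces itself week after week in this toy model: the arrest data keeps “confirming” that the over-policed neighborhood is the high-crime one, even though the underlying offense rates are identical.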

Predictive systems informed by biased data “cannot escape the legacies of the unlawful or biased policing practices that they are built on.” – Rashida Richardson, Jason Schultz, and Kate Crawford (see Rashida Richardson, Jason M. Schultz, and Vincent M. Southerland, “Litigating Algorithms 2019 US Report: New Challenges to Government Use of Algorithmic Decision Systems,” AI Now Institute)

Further, as with risk assessment tools in the pretrial space, jurisdictions may be relying on predictive policing tools too heavily, without fully understanding them or ensuring transparency and accountability in their use (David Robinson and Logan Koepke, “Stuck in a Pattern: Early Evidence on ‘Predictive Policing’ and Civil Rights,” Upturn).

Technologies such as RATs are neither inherently biased nor inherently unbiased – they are tools that amplify the existing biases in our society. Because our human systems and human data are biased, the tools embed that bias as well.