Pretrial risk assessment tools (RATs) can embed bias directly into their algorithmic components. They draw on a history of criminal legal system data that is biased along lines of race, ethnicity, and class, because our criminal legal system targets poor people and people of color through higher rates of policing, more arrests, more convictions, and longer sentences.

We aim to ensure that communities understand how pretrial RATs are adopted in jurisdictions nationwide, and the variety of ways in which they can shape pretrial decision-making. We also want to make clear to local communities that they can fight for an end to pretrial incarceration that does not enshrine pretrial RATs, or at least places major limits on their use.

Risk assessments are already used in jurisdictions all over the United States. Our organizations are pushing to end money bail and massively reduce pretrial incarceration, without the use of pretrial risk assessment algorithms. 

But where jurisdictions do use them, RATs should serve only to undo mass incarceration and reduce racial disparities in jail populations and in community supervision conditions. They must be transparent and understandable to those who administer them, those who are assessed by them, and the communities impacted by them and their use.

The way RATs are currently designed does not ensure this outcome (Partnership on AI, Report on Algorithmic Risk Assessment Tools in the US Criminal Justice System).

Plus, RATs tend to over-inflate risk in the way they label risk levels and make recommendations, and the data they are trained on skews them toward over-predicting risk (Brandon Buskey and Andrea Woods, Making Sense of Pretrial Risk Assessments, The Champion).

They also make pretrial “failure,” which is usually defined as not coming to court or being rearrested, appear far more likely and intentional than it actually is. And RATs measure group risk to try to predict individual risk, without conveying to judges and magistrates how rare these instances of rearrest or failing to appear in court really are.

As David Robinson and Logan Koepke assert, for most tools, “the majority of people who are labeled as being in the highest risk group will not be rearrested if released pretrial,” meaning that the “high risk” label is deeply misleading: it focuses attention on the relatively low chance of “failure” instead of the more likely outcome of pretrial success (David G. Robinson and Logan Koepke, Civil Rights and Pretrial Risk Assessments, Upturn Inc.).
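To see why these labels can mislead, consider a minimal sketch with invented numbers (not drawn from any actual tool or jurisdiction): even if 30 percent of the people labeled “high risk” were rearrested when released, an unusually high rate, 70 of every 100 people carrying that label would succeed pretrial.

```python
# Hypothetical illustration of the base-rate point above.
# The numbers are invented for the example, not taken from any tool or study.
high_risk_group_size = 100        # people a tool labels "high risk"
rearrest_rate_if_released = 0.30  # assumed rearrest rate for that group

rearrested = high_risk_group_size * rearrest_rate_if_released
not_rearrested = high_risk_group_size - rearrested

print(f"Rearrested if released: {rearrested:.0f} of {high_risk_group_size}")
print(f"Not rearrested:         {not_rearrested:.0f} of {high_risk_group_size}")
# Even under this "high risk" label, 70 of 100 people would succeed pretrial.
```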

It is essential, therefore, to understand the input factors being used to predict risk — factors that may go on to deprive someone of pretrial freedom — and the ways that these factors are steeped in race and class bias.

Any system that relies on criminal justice data must contend with the vestiges of slavery, de jure and de facto segregation, racial discrimination, biased policing, and explicit and implicit bias, which are part and parcel of the criminal justice system. Otherwise, these automated tools will simply exacerbate, reproduce, and calcify the biases they are meant to correct.

-Vincent Southerland, “With AI and Criminal Justice, The Devil is in the Data,” ACLU

Read our subsections to explore more: the Impacts of Biased Risk Assessments section examines the effects of using algorithms fraught with bias, and Risk and Fairness explores how risk is defined and calculated.

Examine the bias baked into common risk assessment factors in the Criminal Legal System Bias and Demographic Bias sections.