What are Pretrial Risk Assessment Tools?
Pretrial risk assessment tools, or pretrial RATs, are algorithms that predict certain outcomes for people accused of a crime, in order to help judges and magistrates make pretrial decisions.
Most often, pretrial RATs use information about someone accused of a crime to produce a risk score or scores that try to predict that person’s likelihood of failing to appear in court (FTA) and/or being re-arrested, often called new criminal activity (NCA).
RATs use re-arrest and failure to appear in court as proxies for measuring pretrial “failure” and “dangerousness.” In other words, these tools try to predict who is likely to show up for court and who will be arrested on a new charge if they are released before trial, without taking into account a person’s individual circumstances, such as why they might not be able to come back to court or why the police might arrest them again. See our section on bias to learn more.
RATs inform judges at an essential moment in the criminal legal system process, and they influence release and supervision decisions. Whether or not someone gets to come home before their trial can shape the ultimate outcome of their case; being detained can lead to many negative impacts, including a higher chance of conviction.[1] ([1] Emily Leslie and Nolan G. Pope: The Unintended Impact of Pretrial Detention on Case Outcomes: Evidence from New York City Arraignments, Journal of Law and Economics)
There are many different RATs used across the country in pretrial decision-making. Each state has its own laws and regulations governing its jurisdictions’ pretrial decision-making. Some states and counties directly require or permit risk assessment tools as a part of their pretrial processes, while others use tools without being mandated to do so by law.
Some states and counties are mandated to use tools because of court rulings, and other states and counties are piloting tools as part of potential statewide expansions. Some states have one standardized tool in use for all their jurisdictions, while other states allow counties and cities to choose which tool, if any, they will use.
How do RATs work?
Pretrial RATs take in data points, assign points to each factor, and generate a predicted risk score or scores. These scores are meant to guess an accused person’s likelihood of failing to appear in court, getting arrested again, and/or getting arrested again on a violent charge.
The various pretrial RATs collect different information in different ways from accused people or their histories with the criminal legal system, in order to determine their overall risk score or scores.
Tools also differ in where they source their training data or information, what data points they choose, how many records they gather, and how they weigh each factor.
The information collected can include both demographic and criminal legal system-related factors. Demographic data points commonly include age, substance use, and access (or lack of access) to stable housing or stable employment. Criminal legal system history data points include factors associated with prior convictions or arrests and charges, time spent incarcerated, and prior failures to appear in court.
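As a rough illustration, a point-based RAT can be sketched as a tally of weighted factors. The factor names and point values below are entirely hypothetical and are not taken from any real tool.

```python
# Hypothetical factors and weights, for illustration only -- real tools
# differ in which factors they use and how heavily each one counts.
FACTOR_POINTS = {
    "age_under_23": 2,
    "prior_failure_to_appear": 3,
    "prior_conviction": 1,
    "unstable_housing": 1,
}

def raw_score(person):
    """Sum the points for every factor that applies to this person."""
    return sum(points for factor, points in FACTOR_POINTS.items()
               if person.get(factor, False))
```

A person flagged for youth and a prior failure to appear would tally 5 points under this invented scheme, while a person with none of the factors would score 0.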
Developers of RATs determine which factors, such as age or prior arrests, are statistically linked to the pretrial “failure”[2] outcomes they want their tools to predict. ([2] Kristin Bechtel, Christopher Lowenkamp, and Alex Holsinger: Identifying the Predictors of Pretrial Failure: A Meta-Analysis)
Developers train their tools on existing data[3] about pretrial outcomes in real cases to determine which factors are most predictive for their RAT and how much weight each factor should carry. They look for common factors among people who failed to appear in court or were re-arrested, such as their age or how many times they have been arrested, and analyze which of these factors correlate the most with the outcomes of failing to appear and re-arrest. ([3] Matt Henry: Risk Assessment: Explained, The Appeal)
Developers may use thousands or even millions of real-life records to gather data points and look for these correlations, which often suggest that youth or many prior arrests mean someone is more “risky.”
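The kind of correlation check described above can be sketched as a comparison of outcome rates between groups of historical records. The records and the factor below are made up for illustration; a real analysis would use far more cases and a proper statistical model rather than a simple rate comparison.

```python
# Made-up historical records, for illustration only.
records = [
    {"age_under_23": True,  "rearrested": True},
    {"age_under_23": True,  "rearrested": False},
    {"age_under_23": False, "rearrested": False},
    {"age_under_23": False, "rearrested": False},
]

def rearrest_rate(records, factor, present):
    """Share of records in the given group that ended in re-arrest."""
    group = [r for r in records if r[factor] == present]
    return sum(r["rearrested"] for r in group) / len(group)
```

In this toy dataset, records with the factor show a higher re-arrest rate than records without it, which is the sort of pattern a developer would read as the factor being "predictive."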
No matter how many records they analyze, the dataset used to train the RAT algorithm might not apply to local communities that end up adopting the tool. Even more importantly, the data that go into the RAT can reflect the real-life racial biases that went into collecting that data.[4] ([4] Hayley Tsukayama and Jamie Williams: If A Pre-Trial Risk Assessment Tool Does Not Satisfy These Criteria, It Needs to Stay Out of the Courtroom, Electronic Frontier Foundation)
A RAT gathers its data points either through interviews that pretrial services officers conduct with accused people or, in some cases, by pulling static records from criminal legal system databases.
Some tools, such as the Virginia Pretrial Risk Assessment Instrument (VPRAI), require a pretrial officer or other court official to conduct an interview with the accused person to gather information to feed into the risk assessment. Others, like the Public Safety Assessment (PSA), do not need an interview, and can gather all the information the tool needs through static data collected inside criminal legal system databases.
Some jurisdictions may interview a relative or friend as well, depending on capacity, to verify information that the accused person gives about themselves. For instance, the Minnesota MNPAT guide[5] recommends verifying information through a call with a “collateral contact” that the accused person provides. This is generally not a required step but serves as an optional way to gather and check information. ([5] Minnesota Judicial Branch: Completing the Pretrial Release Evaluation Form and Assessment Tool, Minnesota State Court Administrator’s Office, p. 4)
The RAT then weighs the input factors and produces either one numerical score predicting failure to appear and/or new arrest, or two scores: one for failure to appear and one for new arrest. Some RATs also have a flag for predicting new arrest with a violent charge.
Most pretrial RATs then divide these scores into levels, deciding which scores will fall into the “low”, “medium”, and “high” categories. These categories are meant to translate the numerical scores into risk designations.
One important thing to note here: jurisdictions and tool designers decide where to draw the lines between risk designations based on their ideas of what kinds of “risks” the community can tolerate. Those decisions are policy-based; they are not scientific.
Further, these key decisions that split one risk level from another are not usually made through a public process, and the labels tend to vastly overstate risk.[6] ([6] David G. Robinson and Logan Koepke: Civil Rights and Pretrial Risk Assessments, Upturn Inc.)
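The translation from numerical score to risk label can be sketched as a simple threshold lookup. The cut points below are invented; where a jurisdiction actually places them is a policy choice, not a scientific one.

```python
# Invented cut points, for illustration only: score <= 3 is "low",
# score <= 6 is "medium", anything higher is "high". Real jurisdictions
# set their own thresholds.
CUT_POINTS = [(3, "low"), (6, "medium")]

def risk_level(score):
    """Map a numerical score to a risk label using fixed cut points."""
    for upper_bound, label in CUT_POINTS:
        if score <= upper_bound:
            return label
    return "high"
```

Shifting either threshold by a single point reclassifies everyone whose score sits on the boundary, which is why the placement of these cuts matters so much.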
The scores from pretrial RATs often recommend particular pretrial release or incarceration decisions to judges or magistrates. These recommendations could include pretrial incarceration, levels of pretrial supervision, or simple release.
These particular pretrial recommendations may arise from a decision-making framework (or DMF). The decision-making framework is a set of policy recommendations that courts, legislators, and other decision-makers have set to decide what kinds of incarceration and community supervision will be recommended after a risk level is assigned. The DMF interprets scores and creates recommendations for levels of supervision for released people — or to not release an accused person at all.
Someone who gets a high risk score might be more likely to be detained pretrial, released with onerous supervision conditions, or have a high bail set. Someone who gets a low risk score might be released on their own recognizance.
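A decision-making framework can be sketched as a lookup from risk label to recommended outcome. The mappings below are hypothetical; real DMFs vary by jurisdiction and typically distinguish many more supervision levels.

```python
# Hypothetical DMF, for illustration only -- real frameworks are set by
# courts, legislators, and other local decision-makers.
DMF = {
    "low": "release on own recognizance",
    "medium": "release with pretrial supervision",
    "high": "recommend detention or intensive supervision",
}

def recommendation(risk_label):
    """Look up the policy recommendation attached to a risk label."""
    return DMF[risk_label]
```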
Local and state laws impact how pretrial RAT scores are used during the pretrial process.
While we didn’t find any jurisdiction where the risk assessment formally took the place of a judge or magistrate making a pretrial decision about the release or incarceration of an accused person, RATs do often guide judges’ and magistrates’ decisions. In some jurisdictions, judges or magistrates must refer to them as part of their decision-making.
RAT accuracy is usually evaluated using a statistical process called validation. Validation produces a score that is meant to represent how well the tool predicts whether someone will fail to appear in court or be re-arrested.
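One statistic commonly reported in validation studies is the AUC (area under the ROC curve): the probability that a randomly chosen person who had the predicted outcome (an FTA or re-arrest) received a higher risk score than a randomly chosen person who did not. The sketch below computes it by brute force over all pairs, on toy data; a value of 0.5 means the tool does no better than chance.

```python
def auc(scores, outcomes):
    """Probability that a 'failure' case outscores a 'non-failure' case,
    counting ties as half a win. Brute-force over all pairs; toy-scale only."""
    pos = [s for s, y in zip(scores, outcomes) if y]       # had the outcome
    neg = [s for s, y in zip(scores, outcomes) if not y]   # did not
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

If every "failure" case outscores every "non-failure" case the AUC is 1.0; if the scores carry no information about the outcome it sits at 0.5.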