Many view algorithms as more objective than human judgement, because humans, especially in the criminal legal system, have a tendency toward bias. However, algorithms, including RATs, are not free from bias either. Worse, in these high-stakes decision-making systems, algorithms risk obscuring that bias behind the label of “science.”
A common aphorism about algorithms is “bias in, bias out.” [1: Aliya Ram, “AI risks replicating tech’s ethnic minority bias across business,” Financial Times] If biased information goes into an algorithm, the result will generally reproduce that bias.
Algorithms on their own cannot fix ingrained social problems; they are tools that reflect what we put into them and can only make predictions from the information they are given.
If a particular algorithm is built on decades of racist criminal history data, its predictions will embed those racial biases. The factors pretrial RATs use to decide whether someone is “risky,” from age to one’s history of arrests and convictions, are rooted in a history of bias in the criminal legal system.
Although many algorithms were designed to try to be more objective than people, they often reproduce and magnify the implicit and explicit biases of those who designed them. [2: Julia Angwin, Jeff Larson, Surya Mattu, and Lauren Kirchner, “Machine Bias: There’s software used across the country to predict future criminals. And it’s biased against blacks,” ProPublica]
Algorithms used in a variety of arenas, from the criminal legal system to health care, have harmed the people subject to them, and many lawsuits have been brought against government use of algorithms because of those harms. [3: Rashida Richardson, Jason M. Schultz, and Vincent M. Southerland, “Litigating Algorithms 2019 US Report: New Challenges to Government Use of Algorithmic Decision Systems,” AI Now Institute]
Machine learning-based algorithms in particular can amplify the biases in their input data: they take an assumption, reinforce it through a feedback loop, and run with it to increasingly troubling conclusions. [4: Ellora Tadaney Israni, “When an Algorithm Helps Send You to Prison,” The New York Times]
This often happens because algorithms learn connections between variables that are correlated but have no causal relationship. [5: Karen Hao, “AI is sending people to jail–and getting it wrong,” MIT Technology Review] The algorithm can’t tell the difference, so it learns to associate two things, such as poverty and recidivism or race and crime, and bakes that association into its predictions.
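As a concrete illustration of that feedback loop, here is a deliberately tiny Python simulation with made-up numbers (the neighborhood names, offense rates, and patrol shares are all hypothetical, not drawn from any real jurisdiction). Two neighborhoods have the same underlying offense rate, but one starts out more heavily patrolled; because the “risk score” only ever sees recorded arrests, it flags the over-policed neighborhood as riskier every year and sends it even more patrols, reinforcing the original imbalance.

```python
# A toy simulation of the feedback loop described above. All numbers and
# neighborhood names are hypothetical. Both neighborhoods have the SAME true
# offense rate; the only difference is that neighborhood "A" starts out with
# more patrols, so more of its offenses become recorded arrests.

import random

random.seed(0)

OFFENSE_RATE = 0.05                    # identical true offense rate everywhere
RESIDENTS = 10_000
patrol_share = {"A": 0.7, "B": 0.3}    # A starts out over-policed
recorded_arrests = {"A": 0, "B": 0}

for year in range(5):
    for hood in ("A", "B"):
        # Offenses occur at the same rate in both neighborhoods...
        offenses = sum(random.random() < OFFENSE_RATE for _ in range(RESIDENTS))
        # ...but only the ones that happen where patrols are become data points.
        recorded_arrests[hood] += int(offenses * patrol_share[hood])

    # The "risk score": whichever neighborhood has more recorded arrests is
    # labeled high-risk and receives the larger patrol share next year.
    riskier = max(recorded_arrests, key=recorded_arrests.get)
    patrol_share = {h: (0.8 if h == riskier else 0.2) for h in recorded_arrests}
    print(f"year {year}: arrests={recorded_arrests}, flagged high-risk: {riskier}")
```

The arrest counts here track where patrols were sent, not where offenses actually occurred, yet the score treats them as if they measured underlying risk; that is the correlation-without-causation problem in miniature.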
The same dynamic appears outside the criminal legal system: looking at past examples of famous scientists or previous hires at a tech organization might teach a computer that most scientists or engineers are male, so it learns to exclude women from the acceptable category of “scientist” or “engineer.”
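A minimal sketch of that hiring example, using synthetic data rather than any real company’s records (the feature names and numbers below are invented for illustration): when the historical “hired” labels reflect recruiters who rarely hired women, a model fit to those labels learns a negative weight on the feature standing in for gender, even though qualifications were generated identically for everyone.

```python
# Synthetic illustration of a model learning bias from past hiring decisions.
# The data is made up: qualifications are generated the same way for everyone,
# but the historical "hired" labels were far less favorable to women.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2_000

years_experience = rng.integers(0, 15, size=n)   # same distribution for everyone
is_woman = rng.integers(0, 2, size=n)            # 1 = woman, 0 = man

# Biased historical labels: equally qualified women were hired much less often.
hire_prob = np.clip(0.05 + 0.04 * years_experience - 0.30 * is_woman, 0, 1)
hired = (rng.random(n) < hire_prob).astype(int)

X = np.column_stack([years_experience, is_woman])
model = LogisticRegression().fit(X, hired)

print("weight on years_experience:", round(model.coef_[0][0], 2))
print("weight on is_woman:        ", round(model.coef_[0][1], 2))  # strongly negative
```

The model is never told to discriminate; it simply reproduces the pattern in its training data, a failure similar to what was reported about the recruiting tool discussed in the article below.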
Read this article [6: Jeffrey Dastin, “Amazon scraps secret AI recruiting tool that showed bias against women,” Reuters] or see the video below for a simple explanation of bias in machine learning: