Sentenced by an algorithm — Bias and lack of accuracy in risk-assessment software in the United States criminal justice system

Author: Willem Gravett

ISSN: 1996-2118
Affiliations: BLC LLB (UP) LLM (Notre Dame) LLD (UP), Associate Professor in the Department of Procedural Law, University of Pretoria, Member of the New York State Bar
Source: South African Journal of Criminal Justice, Volume 34 Issue 1, p. 31–54
https://doi.org/10.47348/SACJ/v34/i1a2

Abstract

Developments in artificial intelligence and machine learning have led governments to begin outsourcing authority over the performance of public functions to machines. Indeed, algorithmic decision-making is becoming ubiquitous, from assigning credit scores to people, to identifying the best candidates for an employment position, to ranking applicants for admission to university. Apart from the broader social, ethical and legal considerations, controversies have arisen regarding the inaccuracy of AI systems and their bias against vulnerable populations. The growing use of automated risk-assessment software in criminal sentencing is a cause for both optimism and scepticism. While these tools could increase sentencing accuracy and reduce the risk of human error and bias by providing evidence-based reasons in place of ‘ad hoc’ decisions by human beings beset with cognitive and implicit biases, they also have the potential to reinforce and exacerbate existing biases, and to undermine certain of the basic constitutional guarantees embedded in the justice system. A 2016 decision in the United States, S v Loomis, exemplifies the threat that the unchecked and unrestrained outsourcing of public power to AI systems poses to human rights and the rule of law.