In the era of Artificial Intelligence (AI), it is key that we reinvent how we think about criminal offending, policing, public safety and sentencing, enhancing the authenticity and diversity of automation to drive ethical and evidence-based crime prevention. These were the sentiments of Ms Renée Cummings, a New York-based criminologist and international criminal justice consultant who specializes in AI.
Speaking at a public lecture on AI citizenship, Ms Cummings explored the idea that criminal justice should be the conscience of AI and examined the risky business of outsourcing decision-making to algorithms, advocating for accountable, diverse and ethical technology, and for equitable and inclusive AI as critical to digital identity development and citizenship in the Fourth Industrial Revolution (4IR). The public lecture, themed ‘Diversity, Digital Identity and AI Citizenship in the Fourth Industrial Revolution’, was hosted by the University’s Faculty of Engineering and the Built Environment (FEBE) in association with the Faculty of Law on Friday, 04 October 2019.
She pointed out that using data from racially biased criminal justice systems could lead to measurable biases in both risk scores and outcome measures. “There is a lack of accountability and transparency, alongside potential human rights challenges, in algorithms such as facial recognition, predictive crime mapping, and mobile phone data extraction being developed by police, prisons and border forces,” she said.
“Because algorithms ‘encode assumptions and systematic patterns’, they can reinforce and then embed discrimination. Given the opacity of the automated ‘black box’ data calculation process, criminal justice algorithms should be open source and go through an impact assessment.”
Ms Cummings concluded: “As we build AI into criminal justice without fully addressing (or even understanding) its fundamental problems and deficiencies, we may be setting ourselves up for another painful process of self-correction. There’s much to be gained by forging ahead boldly, and an equal amount to be lost if we place sensitive decisions in the hands of autonomous systems that just aren’t ready to make them.”