He recently penned an opinion article that first appeared in the Daily Maverick on 22 February 2023.
While artificial intelligence has paved the way for great advancement, the caveat is that it is also increasingly being used for harm. The proliferation of autonomous weapons, the spread of dangerous social media rhetoric, entrenched algorithmic bias and the ability of technology to exacerbate our inequalities demonstrate the peril.
A week ago, on 16 February, the German Constitutional Court declared the use of surveillance technology by the police unconstitutional. The German Society for Civil Rights (GFF), which championed the case, argued that the software posed more harm than good, as predictive policing heightened the potential for discrimination.
Intriguingly, the technology is provided by a company from the US – a country in which there has been demonstrable algorithmic bias and discrimination in recent years. For instance, police use Idemia software, which uses algorithms to scan faces, yet results indicate that these algorithms were more likely to confuse black women’s faces than those of white women, or of black or white men.
A statement from the German Constitutional Court says the provisions regulating the use of the technology in the states of Hesse and Hamburg violate the right to informational self-determination. In response, Hesse’s state minister of the interior, Peter Beuth, said that while the ruling was welcome, current practices needed to be made more robust and codified. We are increasingly seeing a push for regulation in the sphere of technology.
As far back as the 1950s, science fiction has sounded a warning about the future. As developments in artificial intelligence (AI) and related technologies have taken hold in recent years, CEOs of tech companies, futurists and AI practitioners have echoed this sentiment.
Stephen Hawking said at a conference in 2017 that “success in creating AI would be the biggest event in human history. Unfortunately, it might also be the last, unless we learn how to avoid the risks.”
More recently, the United Nations warned of the threat AI poses to human rights.
As UN High Commissioner for Human Rights Volker Türk said last week, “human agency, human dignity and all human rights are at serious risk. This is an urgent call for both business and governments to develop quickly effective guardrails that are so urgently needed.”
More than 60 UN member states have since called for greater regulation to ensure security, accountability and stability. As I argued on this platform last week, AI advancements call for legal and ethical frameworks, but as history has demonstrated, legislation is often not proactive.
It is important to note that much of this technology is in its infancy, and the legal responses are just being crafted. The European Union (EU), for instance, only proposed a new legal framework for AI in 2021. It adopted a risk-based approach, which categorises the uses of AI based on the risk posed. The proposed act is the first comprehensive regulatory framework for AI.
Commentators have argued that this shift will spur the “Brussels Effect”, a term coined by Anu Bradford in 2012 denoting the EU’s influence on regulatory frameworks beyond its borders. As the EU has argued, the urgency to push through its AI framework is predicated on the Brussels Effect. Once these regulations have been approved, it could spur a much more significant shift towards establishing a framework globally.
In South Africa, there is currently no legislation that specifically speaks to AI. One of the recommendations of the Presidential Commission on the Fourth Industrial Revolution is to amend, create and review policy and legislation. As the report states, the objective is to empower stakeholders with technology policies that emphasise responsible use and to create a science-literate judiciary. In particular, the focus will be on data privacy and protection laws and digital taxation.
While this provides a helpful initial tool for establishing a legal framework, we lag in adoption, and our scope needs to be more comprehensive to keep pace with AI developments.
As Dr Dusty-Lee Donnelly of the University of KwaZulu-Natal argues, “while a core set of general principles for the ethical development of AI has emerged, those principles must still be operationalised through legal regulations… Thus existing legal principles must be adapted, or new principles developed to mitigate the risks to human well-being while not stifling innovation and leading to non-compliance.”
This is the approach we must take. Without these tangible steps, AI could emerge as a greater threat.
Importantly, this is not an undertaking that can be confined to individual states. Instead, there is a fundamental call to establish a standard set of practices and regulations internationally. Ethical frameworks to guide this process already exist, included in company guidelines and country strategies, and crafted by the UN.
This, however, is not enough, as these frameworks are not tangibly enforceable in practice.
Perhaps most importantly, we should heed Elon Musk’s advice. He argues: “I’m increasingly inclined to think that there should be some regulatory oversight, maybe at the national and international level, just to make sure that we don’t do something very foolish.”
*The views expressed in this article are those of the author/s and do not necessarily reflect those of the University of Johannesburg.