Professor Tshilidzi Marwala is the Vice-Chancellor and Principal of the University of Johannesburg. He recently penned an opinion article that first appeared in the Daily Maverick on 26 November 2020.
Currently, machines that use artificial intelligence are narrowly rather than generally intelligent. Narrow intelligent machines are good at one task but bad at all other tasks. Until these machines become generally intelligent, and develop consciousness, their existential threat to humans is minimal. But the threat is still real.
In his book Homo Deus, Yuval Harari writes that machines are becoming gods. Deus is the Latin word for god. The question that needs to be asked is whether intelligent machines are a force for good or for ill.
Intelligent machines are machines clever enough to perform cognitive tasks that ordinarily require human intelligence. For example, intelligent machines are now deployed in manufacturing to automate production. The advantage of automation is that it increases efficiency and reduces the cost of goods and services. The disadvantage is that the automation of jobs will decimate the labour market, reduce tax revenue, and widen inequality. These machines will usher in an era of techno-feudalism.
Intelligent machines are also increasingly used in medical care. Artificial intelligence (AI) can now read medical images more accurately than human doctors, increasing the probability of saving lives. Together with other digital technologies, these AI machines can be deployed in telemedicine applications in rural areas where medical specialists are scarce, improving the quality of health care. On the other hand, if the data sets used to train these AI machines were gathered elsewhere, the machines are less effective in those rural areas than in the places where the data were collected. The result is that they offer poorer service to rural and poor places than to rich ones.
Technology is also changing the nature of democracy. In this year’s US presidential election, more than 150 million people voted, a historic high, in large part because technology was used in the voting process. Despite President Donald Trump’s criticism, technology is more accurate and reliable in elections than human agents. In this sense, technology is broadening democracy. On the other hand, bots that monitor people online are studying their behaviour, capturing their digital personality signatures and nudging them to vote in a particular way. This has all sorts of implications for the concept of “free will”, and its consequences for democracy are substantial. It implies, fundamentally, that technology, rather than human attributes, can win elections.
One of the recent inventions in the field of AI is the generative adversarial network (GAN). GANs can mimic individual traits such as voices, faces, and other attributes that are at the core of human identity. Politicians have already been digitally placed in compromising situations by GANs. The problem is that the audience cannot tell what is real from what is fabricated. This undermines the very essence of our existence, especially if the records of our existence, such as video footage, can be altered by GANs on an industrial scale. Karl Marx once said that science is necessary so that we can differentiate between what is real and what is fake. It seems that, with GANs, science can exacerbate our inability to distinguish truth from fiction, with devastating consequences.
On the other hand, GANs can do beneficial things. In medical textbooks, using real people to illustrate human bodies has always been fraught with ethical issues. With GANs, synthetic people who look real can be generated and used as teaching aids, eliminating the need for real people. GANs can also generate vital data that is missing from critical processes in nuclear power stations, the health sector, and other vital industries. GANs, therefore, can be used both for good and for bad.
One of the most complex board games is Go, a Chinese game that has been played for thousands of years. Google DeepMind created AlphaGo, which defeated a human champion in 2016. This followed the 1997 defeat of the human chess champion Garry Kasparov by “Deep Blue”, an intelligent machine built by IBM. The point at which machines exceed the intelligence of human beings is called the singularity. What are the implications of AI machines exceeding human capabilities, and of our potentially reaching the singularity? Does it mean that these machines will conspire against us and eliminate us from Earth?
Several issues stand in the way of machines conspiring to eliminate humans. The first is that these machines are narrowly rather than generally intelligent: good at one task but bad at all others. For example, AlphaGo is excellent at playing Go but very bad at driving or making coffee. The same holds for Deep Blue: it is good at playing chess, but it cannot console a bereaved person or coach a young girl on playing soccer. Until these machines become generally intelligent, the existential threat is minimal.
The second issue in how we relate to these machines is consciousness. For a machine to form an intention to harm, it requires consciousness. Are these machines conscious? One school of thought holds that connecting these machines through the Internet of Things could give rise to consciousness in the resulting network of intelligent machines. But this remains an open question.
These intelligent machines have been shown to discriminate racially. Face-recognition algorithms work better for people of European descent than for people of African descent because there is more data on European faces than on African faces. Machine translation works better in Indo-European languages than in African languages. Iris-based identification, considered more secure, works better for European eyes than for African eyes because the contrast between iris and pupil is sharper in the European eye. AI-based credit-scoring algorithms in our banks discriminate against poor people because there is more data on the rich than on the poor.
It seems this era of the Fourth Industrial Revolution “is a tale of two worlds”: one rich and prosperous, the other poor. The question we ought to answer is: what do we do about this? We need to develop ethical guidelines for the use of AI in decision making. Standards on the balance of the data used must be developed to prevent this technology from discriminating, and standards for testing the robustness and fairness of these algorithms must be developed to stop biased algorithms from discriminating.
Many of the AI algorithms available for decision making are accurate in prediction, but they are not transparent. If these algorithms are used to make decisions and an affected person asks for the rationale behind a decision, it is not easy to provide an explanation, precisely because the algorithms are opaque. We need to develop algorithms that are both transparent and accurate. These algorithms are both Frankensteins and gods. To make them more gods than Frankensteins, it is up to us to construct and regulate every aspect of their development and use.
This article is a response to Professor Margaret Levi in the ASSAf Presidential Roundtable on the humanities in artificial intelligence.
*The views expressed in this article are those of the author/s and do not necessarily reflect those of the University of Johannesburg.