Should we ensure that machine-learning algorithms are designed to be fair and unbiased, driven by the principle of fairness rather than the maximisation of profit, asks Prof Tshilidzi Marwala.
The Vice-Chancellor and Principal of the University of Johannesburg (UJ) and the author of Handbook of Machine Learning, recently penned an opinion piece published by the Sunday Independent.
Should machines learn the world as it is or as we wish it to be? – Prof Tshilidzi Marwala
Last week I spoke at the Times Higher Education Emerging Economies Summit in Doha, Qatar, on whether machines should learn from the real world or the imagined world. As I was preparing for this talk, I remembered a trip I took to Singapore with three University of Johannesburg academics, whom I shall call Mkhuseli Baloyi, John Smith and Peter Jones; these are fictitious names, used to protect their identities.
When we departed from Singapore's airport, I noticed that no person was assisting us to depart; a machine was. We each placed the photo page of our passport on the machine. A camera captured our faces, and a machine-learning algorithm compared the picture in the passport with the picture captured by the camera. If the two pictures matched, the gate opened automatically, allowing the passenger through. If they did not match, the passenger had to take his or her credentials to a human entry controller. Furthermore, the machine is able to match images against the Interpol database. Machine learning is a branch of artificial intelligence that uses statistics and mechanisms inspired by the human brain to construct intelligent machines.
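In software terms, the gate's decision can be sketched as a similarity-threshold check between two face "embeddings" (numerical descriptions of a face). The vectors, the cosine-similarity measure and the threshold below are illustrative assumptions, not a description of Singapore's actual system:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two face-embedding vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def gate_decision(passport_embedding, live_embedding, threshold=0.8):
    """Open the gate only if the passport photo and the live camera capture
    are similar enough; otherwise refer the traveller to a human controller."""
    if cosine_similarity(passport_embedding, live_embedding) >= threshold:
        return "open gate"
    return "refer to human controller"
```

In a real system the embeddings would come from a deep network trained on large collections of face images, so the fairness of the gate depends entirely on whose faces that network was trained on.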
I was the first to arrive. I put my passport on the machine and the camera captured my face, but the machine-learning algorithm could not match the picture on my passport to my face as captured live on camera, so a human controller had to assist me. The second to arrive was John, who presented his credentials as I had done, and the system matched the picture on his passport to the picture captured live by the camera. The third was Mkhuseli, who presented his credentials as I had done; the algorithm could not match his passport photo to the photo captured live by the camera, and he too was denied automatic access. The last to arrive was Peter, and the machine-learning system allowed him to pass. What was common between Mkhuseli and me, that we were denied access by the machine-learning algorithm? We are black Africans. What was common between John and Peter, that they were given access? They are of European descent. Why is this machine-learning algorithm discriminating against people of Sub-Saharan African descent?
It is because the machine-learning algorithms currently in use are largely trained on data gathered in North America, Europe and Asia, not on data gathered in Empangeni, where Mkhuseli comes from, or Duthuni, where I come from. Of course, in the Singapore case, the specific data of Mkhuseli and me is now recorded, which means that, in future, Singapore entry for us is likely to be fine. The point, however, remains. Why is more data gathered in Tokyo, New York or London than in Lagos, Johannesburg or Kinshasa? It is because the average person in Lagos, Johannesburg and Kinshasa is poorer than the average person in Tokyo, New York and London.
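A toy simulation can illustrate the mechanism described above: when one traveller appears hundreds of times in a training gallery and another only a couple of times, even a simple nearest-neighbour matcher makes more mistakes on the under-represented traveller. Everything here, the two-dimensional "embeddings", the sample counts and the matcher, is a hypothetical sketch, not real biometric data:

```python
import random

random.seed(42)

def sample_face(centre, spread=1.0):
    # A toy "face embedding": a 2-D point scattered around the person's true appearance.
    return [random.gauss(c, spread) for c in centre]

def nearest_label(point, gallery):
    # 1-nearest-neighbour matcher: return the label of the closest enrolled image.
    def sq_dist(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q))
    return min(gallery, key=lambda item: sq_dist(point, item[0]))[1]

# Two travellers with clearly distinct true appearances.
people = {"well_represented": [0.0, 0.0], "under_represented": [4.0, 4.0]}

# The gallery mirrors where data is gathered: 200 images of one traveller, 2 of the other.
gallery = [(sample_face(people["well_represented"]), "well_represented") for _ in range(200)]
gallery += [(sample_face(people["under_represented"]), "under_represented") for _ in range(2)]

def error_rate(name, trials=500):
    # Fraction of fresh captures of this traveller that the matcher gets wrong.
    wrong = sum(nearest_label(sample_face(people[name]), gallery) != name
                for _ in range(trials))
    return wrong / trials
```

Running `error_rate` for both travellers shows the under-represented one being misidentified far more often, even though the matcher itself treats every point identically; the discrimination comes entirely from the imbalance in the data.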
Companies that seek to maximise shareholder returns by maximising profit create the machine-learning algorithms used to automate these tasks. In economics, companies that act to maximise profit are known as rational companies. Because they are driven by profit maximisation, these companies invest more resources in gathering data in Tokyo, New York and London, where the average person is wealthier, than in Lagos, Johannesburg and Kinshasa. This wealth distribution is real, and these companies are therefore investing on the basis of economic data as it exists. These machines are thus learning the world as it is, but by doing so they inevitably discriminate against me, Mkhuseli and other Sub-Saharan Africans.
What if we imagine a world in which the economic differences between the Tokyo, New York and London nexus and the Lagos, Johannesburg and Kinshasa nexus do not exist? In that case, these machine-learning companies would gather as much data from Lagos, Johannesburg and Kinshasa as they gather from Tokyo, New York and London, and these machine-learning systems would no longer discriminate against people of Sub-Saharan descent. From an economic perspective, however, these systems would have to be trained on the economic unreality of equality in order not to discriminate; if they are trained on the economic reality of inequality, discrimination is inevitable. How do we untangle this dilemma between reality and unreality, which lead, respectively, to discrimination and fairness?
Firstly, we need to understand that technology follows the character of its makers. If its makers create technology without regard for human safety, it can easily become a danger to society. When the Nazis created technology with the intention of murdering people because of their race and religion, the result was genocide. When Dr Wouter Basson created technology with the sole purpose of murdering people, the result was the death of innocent people. It is therefore important that we ensure, primarily, that technology is regulated to protect people. The first principle we should adopt as far as technology is concerned is that it should not kill or harm people.
The second principle we should adopt is that technology should not go against the principles of human rights and dignity. Discrimination, whether done by humans or by intelligent machines, is against the Universal Declaration of Human Rights. To enforce this principle, we should adopt an additional one: economic interests should not supersede human rights. In the first industrial revolution, machines were used to improve the means of production. This made Britain a very wealthy nation; because of this revolution, Britain overtook China and India as the wealthiest country. Nevertheless, what is not put to the fore when we talk of the first industrial revolution is the use of child labour to help maximise profit.
The third principle is that we should embed our values into technology. A classic example to illustrate this is a self-driving car travelling at 120 km/h that encounters a pedestrian. If it can do only one of two things, should it save the pedestrian and kill the passenger, or save the passenger and kill the pedestrian? What should the car do if the passenger is a 60-year-old man and the pedestrian an eight-year-old girl? To answer these questions, we need to interrogate our core values and embed them into these self-driving cars. And if, as a country, we do not have the means to design these cars, how do we ensure that our values are embedded into the cars we import?
In conclusion, we should ensure that these machine-learning algorithms are designed to be fair and unbiased, driven by the principle of fairness rather than the maximisation of profit.
- The views expressed in this article are those of the author/s and do not necessarily reflect those of the University of Johannesburg.