Colonisation by algorithms: The Fourth Industrial Revolution and the ethical use of predictive analytics

The idea of prediction has been around for centuries, and we have always been fascinated by predictions. What makes predictions more real than ever before is that data fusion occurs, and the analytics can use your behavioural input to further its intelligence capability, write Professor Saurabh Sinha and Professor Sarah Mosoetsa.

Prof Saurabh Sinha is an electronic engineer and the Deputy Vice-Chancellor: Research and Internationalisation, University of Johannesburg and Prof Sarah Mosoetsa is a sociologist and the CEO of the National Institute for the Humanities and the Social Sciences, South Africa. They recently penned an opinion piece published by Daily Maverick.

Colonisation by algorithms: The Fourth Industrial Revolution and the ethical use of predictive analytics – Prof Saurabh Sinha & Prof Sarah Mosoetsa (21 July 2021)

In the Fourth Industrial Revolution (4IR), data is contributed by users, technology and cyberphysical interfaces. Inevitably, the data and algorithms find a way to “collude”, which brings about unintended consequences that could, among others, gravely challenge the recently promulgated Protection of Personal Information Act (Popia) as well as the Consumer Protection Act (CPA), and personal and/or organisational cybersecurity.

This new “revolution” offers opportunities for the coming together of engineering and the humanities.

Just before the lockdown, the writers of this article headed for coffee at one of Johannesburg’s malls. Arriving at our favourite coffee shop, we placed our order for cappuccino and caffè Americano and sat down for a conversation about the success of a research project in the humanities and social sciences. We then took a photo and shared it with other participants of the research project and uploaded the image to Facebook.

Afterwards, on Facebook, we saw several images of caffè Americano and cappuccino, inscribed with messages such as "where to buy", "how to make" and so on. For fun, one of us remarked, "Our phones are listening to us", to which someone else quipped, "Hey, Google", at which our phones buzzed in response.

Many of us have had this experience. It feels very much as if your phone has a sixth sense or the ability to listen to your conversations. You are thinking of something and, as you browse your social media profile or the internet, you see adverts and images that reflect your desires. For example, on your social media timeline, you may "Like" a particular brand of chicken wings, and later you see those chicken wings in the hands of a politician you were not going to vote for.

Subconsciously, you start to build some appreciation for this politician; at least the chicken wings offer some common ground! Because you "liked" the politician holding the chicken wings, you click "Like" or hover over the image for slightly longer. Without your direct knowledge, you have contributed to a data set. The politician's campaign platform receives the data and infers an overlap between your preferences and the politician's; you are slowly starting to "like" the politician.

So, what is going on? Are your devices listening to you? Following the public deliberations on privacy by Google, Facebook, Apple and others, you have been mildly convinced that the devices are not directly listening to all your conversations. Like flipping a switch, "activation" occurs only when you say "Alexa", "Ok, Google" and so on. But behind the scenes much is happening: you are constantly contributing data to your mobile phone applications.

You have most likely used Google Maps to navigate to your destination or a calendaring app to schedule the meeting, and you may have just "checked in" to the coffee shop. These data points converge to give the mobile app or social media platform a "sixth sense": that you have most likely ordered that Americano and, because you are with Sarah Mosoetsa (who usually orders the cappuccino), the platform has correlated the information. This is a form of predictive analytics.
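The way several weak "data points" converge into one confident guess can be illustrated with a toy sketch. All the signals, weights and names below are hypothetical; real platforms use far richer models, but the principle of fusing behavioural inputs into a single prediction is the same:

```python
# Toy illustration of predictive analytics: several weak behavioural
# signals are fused into one confident guess. All signals and weights
# here are invented for illustration.

def predict_order(signals):
    """Sum the weight of each candidate order and return the best one."""
    scores = {}
    for candidate, weight in signals:
        scores[candidate] = scores.get(candidate, 0.0) + weight
    # Highest-scoring candidate wins.
    return max(scores.items(), key=lambda item: item[1])

# Hypothetical fused data points for one coffee-shop visit:
signals = [
    ("caffe americano", 0.4),  # your own order history
    ("caffe americano", 0.3),  # checked in at the usual coffee shop
    ("cappuccino", 0.2),       # companion's usual order
    ("caffe americano", 0.1),  # calendar entry: "coffee meeting"
]

best, score = predict_order(signals)
print(best, round(score, 1))  # -> caffe americano 0.8
```

No single signal is decisive on its own; it is the fusion of the check-in, the calendar and the order history that produces the "sixth sense".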

The idea of prediction has been around for centuries, and we have always been fascinated by predictions. What makes predictions more real than ever before is that data fusion occurs, and the analytics can use your behavioural input to further its intelligence capability.

Predictive analytics could have multiple uses. Based on evolving intelligence and computing power, it could, for example, anticipate when your nearby electricity substation requires maintenance. Based on weather changes or the impact of load shedding on a water pump or tower, it could pre-empt water-shedding, or provide data to better manage the Covid-19 pandemic. On the other hand, it could bring together data to show a terrorist or insurgency group where to go next.

Closer to home, you may have decided to change your vote, based on evolving preference data and/or the chicken wings, and thus contributed to a local government election result that you never desired.

Let us provide another example: a local fuel company has designed an algorithm to optimise profit. Meanwhile, you are in a smart car that will shortly require refuelling. The car automatically identifies a fuel station. However, you are not a fan of e-tolls, and you have therefore selected a mapping algorithm that avoids them. Your car's mapping algorithm "colludes" with the fuel company's desire to maximise profit, and so you drive the extra distance. You do not feel bad about this at all – you were, after all, driving along the Garden Route, had purchased chicken wings and were enjoying a coastal view. It feels like a win-win situation for all involved.
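This kind of "collusion" can be sketched as two objectives folded into one ranking: the driver's preference (avoid e-tolls) acts as a hard constraint, while the fuel company's margin quietly discounts the cost of the longer trip. The stations, distances, margins and weighting below are all invented for illustration:

```python
# Hypothetical sketch of two algorithms "colluding": the car's routing
# preference and a fuel company's profit weighting share one ranking.

stations = [
    # (name, extra km to reach, requires e-toll route, company margin)
    ("Station A", 2.0, True, 0.10),
    ("Station B", 9.0, False, 0.40),  # furthest away, highest margin
    ("Station C", 5.0, False, 0.15),
]

def rank(station, avoid_etolls=True, margin_weight=1.0):
    """Lower is 'better' for the driver; margin quietly offsets distance."""
    name, extra_km, etoll, margin = station
    if avoid_etolls and etoll:
        return float("inf")  # the driver's e-toll preference rules it out
    return extra_km - margin_weight * 20 * margin

best = min(stations, key=rank)
print(best[0])  # -> Station B
```

With `margin_weight=0` the nearest non-toll station (Station C) would win; with the company's weighting included, the ranking sends you the extra distance to Station B.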

The situation may, however, not be as green as described here – was it ethical or even legal for the algorithm(s) to take you the extra mile to maximise the fuel company’s profit? For most, the answer would be “no”. As we move from artificial to augmented intelligence, algorithms will do even more to “optimise” your “convenience.” One could think of this as colonisation by algorithms.

African insights and ethical intelligence

With the goal of contributing to the decolonisation of knowledge, one of the online modules at the University of Johannesburg teaches African Insights (the acronym happens to be AI!). This AI brings to the forefront leadership and African value systems, for example ubuntu, the value system captured in the phrase "I am because you are". How does one map this AI onto augmented/artificial intelligence, and so help build ethical intelligence?

In the design of 4IR machines and of machine-to-machine and person-to-machine interfaces, the Institute of Electrical and Electronics Engineers is advancing the principle of ethically aligned design (EAD). In our expression, EAD is thinking that borrows from principles of the humanities and social sciences, and from value systems such as ubuntu, and maps these onto how algorithms think, utilise data and cooperate. Ubuntu is also the name of an open-source computer operating system founded by Mark Shuttleworth; in open-source software development, ubuntu is advanced by bringing together and leveraging the power of a community of programmers.

In EAD, people (and their value systems) and machine intelligence will fuse towards a form of collective intelligence. This creates one possibility for machines to become ethically and perhaps also emotionally intelligent; there is therefore some hope that African insights, with artificial intelligence, could be a unique 4IR contribution.

Similar examples can also be found in the growing field of "digital humanities", which seeks to bring together philosophy and algorithms, ethical studies and robotics, the arts and fashion design with computing, as well as historiography and electronic texts. New studies of the future of work and workers also offer fresh insights. While robotics offers new possibilities for the future of work, consideration for workers (human beings) should remain the primary focus of any algorithm.

These turning points and the coming together of engineering and humanities offer a novel move towards humanising robotics, opportunities of consolidating democracies, encouraging “freedoms and capabilities” of all citizens, and freedom from new forms of fascism.

The views expressed in this article are that of the author/s and do not necessarily reflect that of the University of Johannesburg.
