We urgently need international agreements on human rights and the use of autonomous weapons

Professor Tshilidzi Marwala is the outgoing Vice-Chancellor and Principal of the University of Johannesburg and the incoming United Nations Under-Secretary-General and Rector of the United Nations University. Professor Letlhokwa Mpedi is the incoming Vice-Chancellor and Principal of the University of Johannesburg.

Their opinion article first appeared in the Daily Maverick on 29 January 2023.

Perhaps one of our biggest concerns in the artificial intelligence (AI) age is the use of technology in war. We are already seeing a shift in global debates around fears of lethal autonomous weapons systems, or what have popularly been termed killer robots.

In 1942, in his short story Runaround, Isaac Asimov introduced the Three Laws of Robotics. First, a robot may not injure a human being or, through inaction, allow a human being to come to harm. Second, a robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law. Third, a robot must protect its own existence, as long as such protection does not conflict with the First or Second Law.

More than 80 years on, as artificial intelligence becomes more commonplace and takes on characteristics different from those Asimov imagined, it is apparent that although these laws serve as a practical ethical guideline, they merely scratch the surface.

As Mark Robert Anderson has suggested, “[Asimov] envisioned a world where these human-like robots would act like servants and would need a set of programming rules to prevent them from causing harm… [But] there have been significant technological advancements. We now have a very different conception of what robots can look like and how we will interact with them.”

One of the most critical aspects of robotics to consider is the issue of human rights, particularly in the context of war. Perhaps Asimov should have anticipated the Universal Declaration of Human Rights and reframed the first law as “a robot shall not violate human rights, or by inaction allow human rights to be violated”.

The issue of robot agency comes to the fore, especially in relation to human agency. When a human being or an institution violates human rights, we can determine who is responsible and hold them accountable. When Slobodan Milošević, the former President of the Federal Republic of Yugoslavia, violated human rights, he was indicted by the United Nations International Criminal Tribunal for the former Yugoslavia on charges including war crimes and crimes against humanity.

Similarly, the former President of Liberia, Charles Taylor, was convicted of war crimes and sentenced to 50 years’ imprisonment by the Special Court for Sierra Leone. These are comparatively easy cases to handle because, as far as the crimes are concerned, an identifiable human agent is involved.

The situation becomes more complicated when autonomous machines are involved. Where AI is implemented in warfare, there are three principal configurations of human control.

The first is the human in the loop, where an AI weapon can attack only when a human being gives the green light. In this case, if there is any human rights violation, that human being can be held to account through mechanisms such as a court martial.

The second is the human on the loop, where a human being observes and intervenes only when things go wrong. The human does not have full agency because the AI weapon can engage while the human believes all is well.

The third is the human out of the loop, where the human is not involved at all and the AI weapon has full agency. Here, who to hold accountable becomes complicated. Is it the engineer who designed the system? If so, what about the fact that these weapons evolve beyond the control of their designer and/or manufacturer? Is it the commander who deployed the system?

Who do you hold to account? There are no easy answers to these questions. The point is that even if one can establish who to hold accountable, there is still the challenge of establishing actus reus (a guilty act) and mens rea (a guilty mind).
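To make the three configurations concrete, the following is a minimal, purely illustrative Python sketch; the mode names and the decision rule are our own shorthand, not any real weapons architecture. It shows that only in the first configuration does an explicit human decision precede every engagement.

```python
from enum import Enum, auto

class ControlMode(Enum):
    # The three configurations described above; the names are our shorthand.
    HUMAN_IN_THE_LOOP = auto()      # a human must approve every engagement
    HUMAN_ON_THE_LOOP = auto()      # the system acts unless a human vetoes in time
    HUMAN_OUT_OF_THE_LOOP = auto()  # the system acts with no human gate at all

def may_engage(mode: ControlMode, human_approved: bool = False,
               human_vetoed: bool = False) -> bool:
    """Toy decision rule showing where human agency sits in each mode."""
    if mode is ControlMode.HUMAN_IN_THE_LOOP:
        return human_approved       # nothing fires without explicit consent
    if mode is ControlMode.HUMAN_ON_THE_LOOP:
        return not human_vetoed     # fires unless a human intervenes in time
    return True                     # full machine agency; accountability is unclear

# Only in the first mode does a specific human decision precede every engagement.
assert may_engage(ControlMode.HUMAN_IN_THE_LOOP) is False
assert may_engage(ControlMode.HUMAN_ON_THE_LOOP) is True
assert may_engage(ControlMode.HUMAN_OUT_OF_THE_LOOP) is True
```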

Furthermore, military contractors/manufacturers often enjoy immunity from litigation in certain jurisdictions. This makes it difficult for victims of defective autonomous weapons to sue such manufacturers under product liability law.

That said, those who violate international law and trample on human rights should not be allowed to escape liability. As Mahmoud Cherif Bassiouni puts it: “Impunity for international crimes and for systematic and widespread violations of fundamental human rights is a betrayal of our human solidarity with the victims of conflicts to whom we owe a duty of justice, remembrance, and compensation.”

These issues are not far-fetched. Perhaps one of our biggest concerns in the AI age is how to bring the use of technology in war to the fore of global governance.

We are already seeing a shift in global debates around fears of lethal autonomous weapons systems (LAWS) or what has popularly been termed killer robots. In fact, in June 2021, a United Nations Security Council report on the Libyan civil war suggested that these weapons had been used to kill humans for the first time in 2020.

In December 2020, it was reported that a satellite-controlled machine gun with AI had been used to assassinate the Iranian nuclear scientist Mohsen Fakhrizadeh. At the Convention on Certain Conventional Weapons the following year, consensus was not reached on whether to ban these weapons, and the debate rages on.

The conversation is demonstrably more nuanced than Asimov’s laws suggest. The Universal Declaration of Human Rights states that all humans are born free and equal in dignity and rights, with the right to life, liberty and security of person.

Against the backdrop of “just war”, for example, how do we respond? Just war theory holds that war is justified under certain conditions. In 1980, David Luban defined a just war as: “(i) a war in defence of socially basic human rights (subject to proportionality); or (ii) a war of self-defence against an unjust war.”

Yet, if autonomous weapons are used in this context and humans are out of the loop, who is held accountable? Mariarosaria Taddeo and Alexander Blanchard assert that “only humans are morally responsible for the actions of autonomous weapons systems (AWS). This is because intentions, plans, rights, and duties, praise or punishment can only be attributed in a meaningful way to humans.”

Similarly, the UN Group of Governmental Experts on emerging technologies in the area of Lethal Autonomous Weapons Systems, convened under the Convention on Certain Conventional Weapons, argues that human accountability cannot be transferred to machines: human responsibility for decisions must be retained.

Taddeo and Blanchard argue that, in response to the use of AWS: (1) AI systems should be interpretable; (2) providers and defence institutions should be able to assess the predictability of weapons so as to limit unpredictable outcomes; (3) decision-makers need a high level of knowledge and understanding; (4) the cycle of development and the mode of procurement should be transparent; (5) uses of autonomous weapons should be justified according to the principle of necessity; (6) measures must be implemented to reduce the risk of lethal outcomes from the use of non-lethal weapons; (7) a procedure should be developed to detect errors and undesirable outcomes, evaluate their impact and costs, and define remedial actions; and (8) auditing should be prioritised with the aim of facilitating accountability.

Of course, as the current world order suggests, unjust wars persist. As Elias Carayannis and John Draper resolved in a 2022 article, “Artificial superintelligence emerging in a world where war is still normalised constitutes a catastrophic existential risk.”

How do we ensure that we are safeguarding human rights in this shifting era? As UN conventions have demonstrated, there need to be laws and regulations governing the use of autonomous weapons. Additionally, AI needs to be aligned with existing human values.

In August 2021, Human Rights Watch urged governments to negotiate a treaty that sets standards for meaningful human control over lethal autonomous weapons. While some countries call for an outright ban, others call the conversation premature. Yet, as the pandemic demonstrated, AI is advancing at an unprecedented pace, and we must respond accordingly.

One argument is that AI should be used to predict militarised interstate disputes, and that mediation tactics emphasising technology should be promoted. For instance, through natural language processing, now aided by technologies such as ChatGPT, which analyse texts and languages, we can follow debates on social media channels to better understand the dynamics in specific regions and the influence these could have on the international community.
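As a minimal, purely illustrative sketch of that idea, the snippet below uses the open-source Hugging Face transformers library to score the sentiment of public statements as a crude proxy for escalating rhetoric; the example posts are invented, and a real monitoring system would need far more than off-the-shelf sentiment analysis.

```python
# Illustrative only: score the sentiment of public statements as a crude
# proxy for escalating rhetoric. Requires `pip install transformers` (and a
# backend such as PyTorch); the example posts below are invented.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")  # loads a default English model

posts = [
    "Negotiators from both sides met again today and agreed to keep talking.",
    "Any incursion across the border will be met with overwhelming force.",
]

for post, result in zip(posts, classifier(posts)):
    print(f"{result['label']:>8} ({result['score']:.2f})  {post}")
```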

There also needs to be a push towards achieving the targets set out in the Sustainable Development Goals, because shortfalls in these fundamental areas often cause conflict. Better tracking systems, using satellites or drones, could clarify resource challenges at a state level.

Moreover, there is scope to utilise AI at the decision-making level. To paraphrase the UN, the point has to be to study these emerging intelligent technologies so that they can be implemented to de-escalate violence, increase international stability and protect human rights.

As we navigate this brave new world, Taddeo and Blanchard’s conclusion remains stark: “In the age of autonomous warfare, [autonomous weapons] may perform immoral actions, but it is only by holding the individuals who design, develop and deploy them morally responsible that the morality of warfare can be upheld.”

It is hoped that states will make up their minds and conclude an internationally binding instrument on LAWS before it is too late. In times like this, one can only wonder what the Dutch “father of international law” and author of De Jure Belli ac Pacis (On the Law of War and Peace), Hugo de Groot, would have contended about autonomous weapons and human rights.

*The views expressed in this article are those of the author(s) and do not necessarily reflect those of the University of Johannesburg.
