Letlhokwa George Mpedi is the vice-chancellor and principal of the University of Johannesburg. Tshilidzi Marwala is the rector of the United Nations University and UN under-secretary-general.
They recently published an opinion article that first appeared in the Mail & Guardian on 10 April 2025.
The constructive resolution of labour disputes through established procedures is vital for maintaining labour peace and stability. With developments in artificial intelligence (AI), there may be room to consider the implementation of these technologies in the South African context.
A damning indictment of the Commission for Conciliation, Mediation and Arbitration (CCMA) in 2021 showed that some workers felt disillusioned with the current system. While the effect of the Covid-19 pandemic was undoubtedly a factor at the time, journalist Magnificent Mndebele also outlined how budget cuts had crippled the CCMA, leaving workers vulnerable to unfair dismissals and exploitation, and some critics warned of a deepening crisis in labour rights.
The numbers speak to this. The CCMA is estimated to handle more than 100 000 cases annually, yet about 40% of employees lose their cases at arbitration.
Labour disputes are inevitable, so it’s essential to contain them within reasonable limits. An effective dispute settlement system helps manage conflicts and promotes industrial peace, which supports development, economic efficiency and social equity, according to the International Labour Organisation (ILO).
As the labour market evolves in the digital age, dispute resolution mechanisms must also adapt, raising several pertinent questions. For instance, what if AI had a hand in the adjudication of labour disputes between employers and their employees? What if the outcome of a wrongful termination case or a collective bargaining agreement dispute was decided upon by algorithms? What if employers could flag potential compliance issues before they become legal problems? Would this help or hinder employment relations?
As we considered the magnitude of the commission's caseload and the scope of its work, we pondered whether technology could be the injection it requires. If AI played a role in adjudicating employee disputes, it could analyse large volumes of workplace cases, assess precedents and suggest outcomes based on established legal principles.
For example, in a wrongful termination case, AI could review company policies and similar historical cases alongside the relevant labour laws to determine whether the termination was justified. It might flag potential biases in decision-making by comparing outcomes across different demographics, ensuring consistency and fairness.
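A demographic consistency check of the kind described above can be sketched in a few lines. The data, group names and the 80% threshold below are all hypothetical (the threshold echoes the "four-fifths" disparate-impact rule of thumb); a real system would work on audited case records with a proper statistical test:

```python
from collections import defaultdict

def flag_outcome_disparity(cases, threshold=0.8):
    """Flag groups whose rate of employee-favourable rulings falls below
    `threshold` times the best-off group's rate -- a rough screen for
    inconsistent outcomes across demographics."""
    totals = defaultdict(int)
    favourable = defaultdict(int)
    for group, outcome in cases:
        totals[group] += 1
        if outcome == "ruled_unfair":  # dismissal ruled unfair: favours the employee
            favourable[group] += 1
    rates = {g: favourable[g] / totals[g] for g in totals}
    best = max(rates.values())
    return {g: r for g, r in rates.items() if r < threshold * best}

# Hypothetical arbitration outcomes: (demographic group, ruling)
cases = [
    ("group_a", "ruled_unfair"), ("group_a", "ruled_unfair"),
    ("group_a", "ruled_fair"),   ("group_a", "ruled_unfair"),
    ("group_b", "ruled_fair"),   ("group_b", "ruled_fair"),
    ("group_b", "ruled_unfair"), ("group_b", "ruled_fair"),
]
print(flag_outcome_disparity(cases))  # → {'group_b': 0.25}
```

Here group_b wins only 25% of cases against group_a's 75%, so it is flagged for human review; the point is surfacing the pattern, not deciding the cases.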
Platforms such as Modria, Configero and Strategic Labor already facilitate online dispute resolution, indicating that there is certainly precedent for this in the workplace.
If we consider the challenges associated with the CCMA, traditional dispute resolution can often be slow and costly. AI can reduce backlogs in labour courts and arbitration bodies while also cutting costs for employees and employers.
AI-powered systems can also help HR departments detect patterns of discrimination or unfair dismissals, making workplace conflict resolution more data-driven and systematic. In other words, employment disputes could be resolved more effectively before they even reach the CCMA.
Other considerations outlined in some of Mpedi’s research include AI language translators, which can alleviate language barriers in the legal system, making justice more inclusive. These tools can translate legal jargon, facilitate communication between non-English speakers and legal experts, and improve the precision of legal proceedings. Online dispute resolution (ODR) presents an innovative approach to justice by leveraging digital capabilities for affordability, accessibility and convenience.
This requires addressing technology literacy issues, ensuring digital access and implementing robust training and ethical frameworks to prevent further marginalisation and uphold privacy.
But AI-powered legal chatbots and virtual assistants offer hope by providing accessible legal information, aiding in tasks like form completion, and offering personalised guidance. These tools empower users to understand legal options.
Using deep learning neural networks, AI-driven predictive analytics can bring transparency and efficiency to the legal system. These tools forecast potential outcomes by analysing historical cases and data, optimising resource allocation, and informing legal policy.
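The precedent-based forecasting described above can be illustrated with a deliberately simple nearest-neighbour sketch. Everything here is hypothetical: the feature names, the precedent "database" and the similarity measure (shared features) stand in for the learned representations and far larger corpora a real predictive model would use:

```python
from collections import Counter

def predict_outcome(case, precedents, k=3):
    """Predict a dispute outcome from the k most similar historical cases,
    scoring similarity as the number of shared case features."""
    scored = sorted(
        precedents,
        key=lambda p: len(case & p["features"]),
        reverse=True,
    )
    votes = Counter(p["outcome"] for p in scored[:k])
    return votes.most_common(1)[0][0]

# Hypothetical precedents for dismissal disputes.
precedents = [
    {"features": {"no_hearing", "long_service"},    "outcome": "unfair"},
    {"features": {"no_hearing", "no_warning"},      "outcome": "unfair"},
    {"features": {"misconduct", "prior_warnings"},  "outcome": "fair"},
    {"features": {"misconduct", "hearing_held"},    "outcome": "fair"},
]
new_case = {"no_hearing", "long_service", "misconduct"}
print(predict_outcome(new_case, precedents))  # → unfair
```

Even this toy version makes the transparency point concrete: the prediction can be traced back to the specific precedents that drove it, which is exactly the kind of explainability a deployed system would need.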
These technological advancements certainly have the potential to make justice more accessible and efficient. Across the world, we are already seeing these models being implemented and we should follow suit.
In his First Institute (1628), Sir Edward Coke wrote: “Reason is the life of the law; nay, the common law itself is nothing else but reason … The law, which is perfection of reason.”
The very principle that “reason is the life of the law” suggests that legal decisions must be grounded in rationality and fairness. Regarding AI in employment adjudication, there are interesting parallels and tensions with Coke’s principle.
AI systems can potentially promote consistency by applying the same analytical framework across similar cases, reducing the effect of human biases or inconsistencies that might affect traditional adjudication.
But issues such as algorithmic bias and accountability in cases of error would need to be addressed through transparency and oversight. This, in the context of AI, is a very real consideration. In a research brief for Boston University, Ngozi Okidegbe, Boston University’s Moorman-Simon Interdisciplinary Career Development Associate Professor of Law and an assistant professor of computing and data sciences, argues: “In theory, if the predictive algorithm is less biased than the decision-maker, that should lead to less incarceration of black and indigenous and other politically marginalised people. But algorithms can discriminate.”
We have already seen this across the criminal justice system in the United States. A 2016 ProPublica investigation found that an AI system used for predicting criminal behaviour was biased against black defendants, which reinforced racial disparities in sentencing. A concern, of course, is that these systems are often created in the West and transplanted into our context. This makes the risk posed by these embedded biases all the more worrying.
In 2021, Marwala’s research indicated a two-fold solution for this. First, using the “wisdom of the crowd”, or a diverse and sufficiently large collective opinion, can minimise noise and bias in decision-making, including within the judiciary.
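The "wisdom of the crowd" effect can be shown with a small, deliberately hypothetical example: several independent assessors recommend a sanction, their individual views are noisy, and a robust aggregate (here, the median) cancels much of that noise. The numbers below are invented purely for illustration:

```python
import statistics

# Hypothetical sanction recommendations (in months) from a diverse panel
# of independent assessors; each view is noisy, but they scatter around
# a common signal.
panel = [4, 7, 5, 9, 6, 5, 8, 6, 3, 7]

# Aggregating cancels much of the idiosyncratic noise: the median is a
# 'crowd' judgment that no single outlier can drag far.
crowd_view = statistics.median(panel)
lone_view = panel[3]  # one assessor's outlying recommendation

print(crowd_view, lone_view)  # → 6.0 9
```

The same logic underpins panel adjudication: a sufficiently large and diverse set of opinions is more stable than any single assessor, which is the statistical intuition behind applying it to judicial decision-making.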
Second, a quantitative approach suggests AI can predict court case outcomes more accurately than lawyers and may even improve judicial judgments. AI can assess risks by analysing evidence, attorney presentations, and witness testimonies, offering rational sentencing recommendations based on reoffending probabilities.
Other research suggests short-term policy recommendations such as education, training and transparency measures, followed by medium- and long-term initiatives such as legal synchronisation, accountability frameworks and AI regulation.
That said, as the saying goes, the more things change, the more they stay the same. The use of AI in employment dispute adjudication must respect the established principles of natural justice (the right to be heard, and that no one should be a judge in their own cause) and procedural fairness.
This is easier said than done, especially when one considers the challenges associated with AI, such as bias and discrimination, lack of transparency and explainability, ethical concerns, and data accuracy and reliability.
The future of AI in adjudicating employment disputes in South Africa probably involves a hybrid approach — along the lines of some of Marwala’s previous recommendations. The future may very well entail AI analysing disputes and providing recommendations, but it will still be important to adopt a human-in-the-loop approach to ensure fairness.
To return to the thought of Coke, reason probably encompasses more than just logical consistency. The legal system also values practical wisdom, contextual understanding, moral reasoning and even empathy — qualities that current AI systems may struggle to replicate.
Legal reasoning often requires balancing competing principles, understanding nuanced human situations and exercising judgment in ways that may be challenging for algorithmic approaches.
And so, the idea is not to completely replace humans with AI. It would, however, be myopic to ignore AI’s benefits, particularly cognisant of our socioeconomic context. As author Cory Doctorow explains it, “Algorithmic decision tools, overseen by humans, seem to hold out the possibility of doing the impossible and having both objective fairness and subjective discretion. Because it is grounded in computable mathematics, an algorithm is said to be ‘objective’: given two equivalent reports of a parent who may be neglectful, the algorithm will make the same recommendation as to whether to take their children away. But because those recommendations are then reviewed by a human in the loop, there’s a chance to take account of special circumstances that the algorithm missed. Finally, a cake that can be both had, and eaten.”
*The views expressed in this article are those of the authors and do not necessarily reflect those of the University of Johannesburg.