Facebook: A cog in the machinery of violence

Professor Tshilidzi Marwala is the Vice-Chancellor and Principal of the University of Johannesburg. He recently penned an opinion article, published in the City Press on 15 June 2020.

For all Facebook’s worldwide popularity as a social networking site, the limitations and dangers associated with it continue to blight the company. Between the Friday afternoon and the Sunday evening that followed the initial anti-police-brutality protests in the US, the fault lines within Facebook became starker than ever before.

In the same brushstroke, or status post if you will, Facebook chief executive officer (CEO) Mark Zuckerberg condemned George Floyd’s killing at the hands of the police but stayed silent on the incitement of violence against protesters. As someone on the information superhighway, he must know the old saying that silence means consent.

The contradictory nature of Zuckerberg’s response to this highly charged political and racial landscape perhaps best illustrates Facebook’s historical role as a tool that may have inadvertently been used for violence.

In the aftermath of Floyd’s death, few brands and public figures dared to remain silent, and Facebook and Zuckerberg were no exception. In solidarity with the black community, Zuckerberg vowed in a post that Facebook would do more to support the community’s equality and safety: “It’s clear that Facebook has more work to do to keep people safe and ensure that our systems don’t amplify bias.”

Yet this introspective take came as Zuckerberg made an almost unilateral decision not to take down incendiary posts in which US President Donald Trump appeared to call for violence against protesters.

The decision triggered a mass virtual walkout by the company’s employees in the days that followed. Matters were made worse when it was revealed that while Zuckerberg had spoken to Trump right before the decision was taken, he had not consulted the civil rights leaders at the helm of the protests. Despite the backlash, Zuckerberg has been steadfast in his stance that Facebook’s principles and policies, which support free speech, indicate that this was the right action to take.

When free speech is in conflict with human rights, what is the best course of action? Do you pursue free speech at the expense of human rights, or human rights at the expense of free speech? Pressure has mounted on Facebook since Snapchat joined Twitter in doing what was once thought unimaginable: limiting the reach of Trump’s social media posts out of concern that his rhetoric would incite violence. The subsequent events are but a microcosm of Facebook’s often contradictory stance.

Far from being a simple social media site that allows users to post a status or share an article, Facebook has played a pivotal role in shaping public thought, which has often sparked action. In his book Future Politics: Living Together in a World Transformed by Tech, English lawyer Jamie Susskind explains that technology has increasingly allowed us to read the news that confirms our views and beliefs while filtering out the information we do not agree with.

Our News Feeds on Facebook are primarily tailored to our beliefs. “Problematically, this means that the world I see every day may be profoundly different from the one you see,” he says.

In 2018, about 4 000 internal Facebook documents leaked to American news network NBC showed that Zuckerberg kept competitors in check by treating user data as a bargaining chip. According to an in-depth report, the company allegedly granted data access to partners that spent money on Facebook or shared their own data with it, as well as to application developers who were considered personal friends of Zuckerberg.

The impact of this is far-reaching. In the same year, Zuckerberg testified before Congress after it was revealed that British research and consulting firm Cambridge Analytica had harvested the personal data of about 87 million Facebook profiles, without users’ consent, for political advertising. How did this happen? Cambridge Analytica created a survey for academic use in which thousands of Facebook users participated.

Facebook allowed the application to collect the personal information of everyone in the participants’ Facebook networks. With this data, Cambridge Analytica was able to build psychographic profiles for political campaigns, including the type of advertisement that would be most effective for each user.

At the time, Zuckerberg told Congress: “Facebook is an idealistic and optimistic company, but it is clear now that we did not do enough to prevent these tools from being used for harm as well. That goes for fake news, foreign interference in elections and hate speech, developers and data privacy. We did not take a broad enough view of our responsibility, and that was a big mistake.”

The even starker, darker underbelly of this is that Facebook has been used as an effective tool to stoke extremism. In fact, Facebook researchers found that 64% of all users who join an extremist group do so based on the company’s recommendation tools. As internet activist Eli Pariser puts it: “Left to their own devices, personalisation filters serve up a kind of invisible auto-propaganda, indoctrinating us with our own ideas, amplifying our desire for things that are familiar and leaving us oblivious to the dangers lurking in the dark territory of the unknown.”

Yet Facebook has a long history of sitting on the sidelines. The company admitted that it had not done enough to prevent violence and hate speech in Myanmar during the 2017 Rohingya genocide. A report by BSR, a San Francisco-based not-for-profit organisation, found that “Facebook has become a means for those seeking to spread hate and cause harm [in Myanmar], and posts have been linked to offline violence.”

Similarly, hate speech targeted at minorities in the northeastern Indian state of Assam has relentlessly spread through Facebook. In another instance, a University of Warwick study found that attacks on refugees in Germany increased by about 50% where Facebook use was prevalent.

While the company has declared that it is cracking down on extremist content, this effort relies largely on automated systems that remove violent images and inappropriate content. As demonstrated by the furore around Trump’s posts, enforcement also depends on the origin of the extremist content. And even this action is imperfect: a whistle-blower claimed last year that, in some instances, the software has inadvertently created for extremist organisations the very kind of posts it is supposed to combat, in a fashion similar to the birthday reels it automatically creates for users.

While Facebook may be merely a cog in all of this machinery, it is hardly a passive observer. By standing by under the guise of free speech, it has stoked fears and incited violence. As employees revolt and users take notice, could this be Facebook’s downfall? Or could this just be a repeat of recent history – another overlooked misstep for a CEO and his social media empire?

The views expressed in this article are those of the author/s and do not necessarily reflect those of the University of Johannesburg.

Prof Tshilidzi Marwala, Vice-Chancellor & Principal of the University of Johannesburg