Social media platforms must regulate disinformation and hate speech


There has been an explosion of misinformation, disinformation and hate speech on Kenyan social media over the past two weeks, from politicians, bloggers, supporters and others who have blatantly misrepresented how they are performing, how their opponents are allegedly acting, and how the Independent Electoral and Boundaries Commission (IEBC) has been conducting the elections. To alert users to potential misinformation, Twitter has flagged false news sources with a disclaimer.

The number of active Facebook users in Kenya is 9.95 million, representing 18% of the population. Additionally, YouTube, Instagram, LinkedIn, Snapchat and Twitter have 9.26, 2.50, 1.75 and 1.35 million users. The market dominance of these companies means they control vast amounts of information, unilaterally dictating what content may be displayed and regulating online discourse through opaque processes.

Online activity and freedom of expression are governed by the Constitution and various statutes. Chief among them is Article 33 of the Constitution, which enshrines freedom of expression while prohibiting hate speech, advocacy of hatred, defamation, incitement to violence and discrimination.

Other laws include the Penal Code, the controversial Computer Misuse and Cybercrimes Act, and the National Cohesion and Integration Commission (NCIC) Act, which prohibit hate speech, the dissemination of false publications and incitement to violence. It should be noted that critics of the false-publications provisions argue that they are too subjective and wrongly make the state the arbiter of truth.

There are growing concerns that social media platforms threaten democracy, freedom of choice, national cohesion and other human rights. During the 2016 US presidential election, Cambridge Analytica used the personal data of 87 million Facebook users to manipulate politics through micro-targeted messages. That data was used to build personality profiles, around which political messaging was tailored. Cambridge Analytica also worked on Kenya's controversial 2017 election.

The Institute for Strategic Dialogue reported in June 2022 that Islamic State and Al Shabaab were recruiting in East Africa, including Kenya, through Facebook. A separate report revealed that online influencers were being paid to attack activists and judges over proposed constitutional changes.

In the Global South, social media platforms such as Facebook and WhatsApp disproportionately serve as the primary means of internet access. The actions of these platforms therefore have a profound effect on freedom of expression and access to information. Yet although the platforms make millions in countries like Kenya, they do not invest as much in content moderation there as they do in Europe or the Americas.

As a result, these markets have fewer, poorly trained content moderators and fewer custom AI systems available to flag, review and remove inappropriate content. A further risk is that reviewers and AI systems designed to identify and flag inappropriate content do not understand local languages and contexts, which can lead to further harm.

While national laws protect against defamation, harassment, incitement, obscenity, terrorist recruitment and child abuse, social media companies enforce only their own terms of use. It is therefore imperative to ensure transparency regarding the rules, tools, standards and actions taken by social media platforms in Kenya and Africa.

Perhaps we should put in place redress and grievance mechanisms that are legitimate, accessible, predictable, fair, rights-compatible and transparent. Recently, the Council for Responsible Social Media, a nonpartisan group of experts, civil society organisations and prominent Kenyans, called on these platforms to invest more in content moderation and publicly commit to a transparent code of practice.
