Facebook risks ban in Kenya for failing to stop hate speech


Kenya’s ethnic cohesion watchdog, the National Cohesion and Integration Commission (NCIC), has directed Facebook to stop the spread of hate speech on its platform within seven days or face suspension in the East African country.

The watchdog was reacting to a report by advocacy group Global Witness and legal non-profit Foxglove, which highlighted Facebook’s failure to detect hate speech in ads. The report comes as the country’s general elections approach.

The Global Witness report corroborated NCIC’s own findings that Meta, Facebook’s parent company, was slow to remove and prevent hateful content, fanning an already volatile political environment. The NCIC has now called on Meta to increase moderation before, during and after the elections, while giving it one week to comply or be banned in the country.

“Facebook is in violation of the laws of our country. They have allowed themselves to be a vector of hate speech and incitement, misinformation and disinformation,” said NCIC commissioner Danvas Makori.

Global Witness and Foxglove also called on Meta to halt political ads, and to use “break glass” measures — the stricter emergency moderation methods it used to stem misinformation and civil unrest during the 2020 U.S. elections.

In Kenya, Facebook has a penetration of 82%, making it the second most widely used social network after WhatsApp.

Facebook’s AI models fail to detect calls for violence

To test Facebook’s claim that its AI models can detect hate speech, Global Witness submitted 20 ads that called for violence and beheadings, in English and Swahili; all of them, except for one, were approved. The human rights group says it used ads because, unlike posts, they undergo a stricter review and moderation process, and because it could take the ads down before they went live.

“All of the ads we submitted violate Facebook’s community standards, qualifying as hate speech and ethnic-based calls to violence. Much of the speech was dehumanizing, comparing specific tribal groups to animals and calling for rape, slaughter and beheading,” Global Witness said in a statement.

Following the findings, Ava Lee, who leads Global Witness’ Digital Threats to Democracy campaign, said, “Facebook has the power to make or break democracies and yet time and time again we’ve seen the company prioritize profits over people.”

“We were appalled to discover that even after claiming to improve its systems and increase resources ahead of the Kenyan election, it was still approving overt calls for ethnic violence. This isn’t a one-off. We’ve seen the same inability to function properly in Myanmar and Ethiopia in the last few months as well. The possible consequences of Facebook’s inaction around the election in Kenya, and in other upcoming elections around the world, from Brazil to the U.S. midterms, are terrifying.”

Among other measures, Global Witness is calling on Facebook to double down on content moderation.

In response, the social media giant says it is investing in people and technology to stop misinformation and harmful content.

It said it had “hired more content reviewers to review content across our apps in more than 70 languages — including Swahili.” In the six months to April 30, the company reported taking down more than 37,000 pieces of content on Facebook and Instagram for violating its hate speech policies, and another 42,000 for promoting violence and incitement.

Meta told TechCrunch that it is also working closely with civic stakeholders such as electoral commissions and civil society organizations to see “how Facebook and Instagram can be a positive tool for civic engagement and the steps they can take to stay safe while using our platforms.”

Other social networks, including Twitter and more recently TikTok, are also under scrutiny for not playing a more proactive role in moderating content and stemming the spread of hate speech, which is perceived to fuel political tension in the country.

Just last month, a Mozilla Foundation study found that TikTok was fueling disinformation in Kenya. Mozilla reached this conclusion after reviewing 130 highly watched videos filled with hate speech, incitement and political disinformation, in contradiction of TikTok’s policies against hate speech and the sharing of discriminatory, inciteful and synthetic content.

In TikTok’s case, Mozilla concluded that content moderators’ unfamiliarity with the political context of the country was among the leading reasons why some inflammatory posts were not taken down, allowing the spread of disinformation on the social app.

Calls for the social media platforms to employ stricter measures come as heated political discussions, divergent views and outright hate speech from politicians and citizens alike increase in the run-up to the August 9 polls.
