Meta’s first human rights report is largely self-congratulatory

Meta today released its first annual human rights report, which — in the company’s words — includes “insights and actions from [Meta’s] human rights due diligence on products, countries, and responses to emerging crises.” The 83-page report, covering the years 2020 and 2021, strikes a largely self-congratulatory tone, defending Meta’s misinformation strategy while failing to touch on allegations of biased content moderation.

Regulators and civil rights groups have claimed over the years that Meta fails to put in place proper safeguards against hate speech, both in the U.S. and in countries like Myanmar, where Facebook has been used to promote violence against minorities. There’s evidence to suggest that Meta’s business practices have played a role in abuses ranging from digital redlining to the insurrection at the U.S. Capitol. Meta itself has acknowledged this (to a degree); an internal study conducted by the company found that the majority of people who join extremist groups do so because of the company’s recommendation algorithms.

The human rights report, spearheaded by Meta human rights director Miranda Sissons, who joined the company three years ago, contains little in the way of revelations. Meta claims that it’s struck a “balance” between freedom of expression and security, with policies to fight health misinformation and emerging implicit threats. In the report, the company also explores the privacy and safety risks associated with Ray-Ban Stories, its glasses that can record photos and videos, including how data from the glasses could be stored and searched in the cloud.

But the report glosses over — among other topics — Meta’s efforts to date in India, where its products have often been overwhelmed with inflammatory content, as reporting by The Wall Street Journal and others has shown. Meta commissioned an assessment of its India operations in 2020 from the law firm Foley Hoag LLP, but today’s report contains only a summary of that assessment, and Sissons has said that Meta doesn’t plan to release it in its entirety.

In the summary, Foley Hoag analysts note the potential for Meta’s platforms to be “connected to salient human rights risks caused by third parties,” including “advocacy of hatred that incites hostility, discrimination, or violence.” Meta says that it’s studying the recommendations but hasn’t yet committed to implementing them; human rights groups have accused the company of narrowing the assessment’s scope and delaying its completion.

As Engadget points out, the report also avoids delving into the implications of the metaverse — an increasingly fraught space where human rights are concerned. Reports suggest that the metaverse as it exists today across Meta’s products — a mix of social virtual reality experiences — has a sexual assault and moderation problem. One corporate watchdog documented misogynistic and racist comments, insufficient protections for children and a reporting system that left the door open for repeat offenders.

Meta has commissioned various ad hoc assessments of its operations in recent years, including in Indonesia, Sri Lanka, Cambodia and Myanmar. High-profile leaks and hearings have increased pressure on the company to show that it’s making progress on stemming the tide of harmful content. Sissons told CNBC that about 100 people are now working on human rights-related issues at Meta, and that the size of the team she directly oversees has grown to eight people.
