
UK wants to squeeze freedom of reach to take on internet trolls


The UK government has announced (yet) more additions to its expansive and controversial plan to regulate online content — aka the Online Safety Bill.

It says the latest package of measures to be added to the draft are intended to protect web users from anonymous trolling.

The Bill has far broader aims as a whole, comprising a sweeping content moderation regime targeted at explicitly illegal content but also ‘legal but harmful’ stuff — with a claimed focus on protecting children from a range of online harms, from cyberbullying and pro-suicide content to exposure to pornography.

Critics, meanwhile, say the legislation will kill free speech and isolate the UK, creating splinternet Britain, while also piling major legal risk and cost on doing digital business in the UK. (Unless you happen to be part of the club of ‘safety tech’ firms offering to sell services to help platforms with their compliance, of course.)

In recent months, two parliamentary committees have scrutinized the draft legislation. One called for a sharper focus on illegal content, while another warned the government’s approach is both a risk to online expression and unlikely to be robust enough to address safety concerns — so it’s fair to say that ministers are under pressure to make revisions.

Hence the bill continues to shape-shift or, well, grow in scope.

Other recent (substantial) additions to the draft include a requirement for adult content websites to use age verification technologies; and a massive expansion of the liability regime, with a wider list of criminal content being added to the face of the bill.

The latest changes, which the Department of Digital, Culture, Media and Sport (DCMS) says will only apply to the biggest tech companies, mean platforms will be required to provide users with tools to limit how much (potentially) harmful but technically legal content they could be exposed to.

Campaigners on online safety frequently link the spread of targeted abuse like racist hate speech or cyberbullying to account anonymity, although it’s less clear what evidence they’re drawing on — beyond anecdotal reports of individual anonymous accounts being abusive.

Yet it’s similarly easy to find examples of abusive content being dished out by named and verified accounts. Not least the sharp-tongued secretary of state for digital herself, Nadine Dorries, whose tweets lashing an LBC journalist recently led to an awkward gotcha moment at a parliamentary committee hearing.

Point is: Single examples — however high profile — don’t really tell you very much about systemic problems.

Meanwhile, a recent ruling by the European Court of Human Rights — which the UK remains bound by — reaffirmed the importance of anonymity online as a vehicle for “the free flow of opinions, ideas and information”, with the court clearly demonstrating a view that anonymity is a key component of freedom of expression.

Very clearly, then, UK legislators need to tread carefully if the government’s claim that the legislation will transform the UK into ‘the safest place to go online’ — while simultaneously protecting free speech — is not to end up shredded.

Given internet trolling is a systemic problem which is especially problematic on certain high-reach, mainstream, ad-funded platforms, where really vile stuff can be massively amplified, it might be more instructive for lawmakers to consider the financial incentives linked to which content spreads — expressed through ‘data-driven’ content-ranking/surfacing algorithms (such as Facebook’s use of polarizing “engagement-based ranking”, as called out by whistleblower Frances Haugen).

However, the UK’s approach to tackling online trolling takes a different tack.

The government is focusing on forcing platforms to provide users with options to limit their own exposure — despite DCMS also recognizing the abusive role of algorithms in amplifying harmful content (its press release points out that “much” content that’s expressly forbidden in social networks’ T&Cs is “too often” allowed to stay up and “actively promoted to people via algorithms”; and Dorries herself slams “rogue algorithms”).

Ministers’ chosen fix for problematic algorithmic amplification is not to press for enforcement of the UK’s existing data protection regime against people-profiling adtech — something privacy and digital rights campaigners have been calling for for literally years — which could certainly limit how intrusively (and potentially abusively) individual users could be targeted by data-driven platforms.

Rather, the government wants people to hand over more of their personal data to these (typically) adtech platform giants so that they can create new tools to help users protect themselves! (Also relevant: The government is simultaneously eyeing reducing the level of domestic privacy protections for Brits as one of its ‘Brexit opportunities’… so, er… 😬)

DCMS says the latest additions to the Bill will make it a requirement for the largest platforms (so-called “category one” companies) to offer ways for users to verify their identities and control who can interact with them — such as by selecting an option to only receive DMs and replies from verified accounts.

“The onus will be on the platforms to decide which methods to use to fulfil this identity verification duty but they must give users the option to opt in or out,” it writes in a press release announcing the extra measures.

Commenting in a statement, Dorries added: “Tech firms have a responsibility to stop anonymous trolls polluting their platforms.

“We have listened to calls for us to strengthen our new online safety laws and are announcing new measures to put greater power in the hands of social media users themselves.

“People will now have more control over who can contact them and be able to stop the tidal wave of hate served up to them by rogue algorithms.”

Twitter does already offer verified users the ability to see a feed of replies only from other verified users. But the UK’s proposal looks set to go further — requiring all major platforms to add or expand such features, making them available to all users and offering a verification process for those willing to prove their identity in exchange for being able to maximize their reach.
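To make the mechanics a little more concrete, here is a minimal sketch in Python of how such a ‘verified accounts only’ setting could gate replies and DMs on the recipient’s side. To be clear, every name and field here is an assumption made for illustration, not any platform’s actual API.

from dataclasses import dataclass

@dataclass
class Account:
    handle: str
    is_verified: bool  # identity privately verified with the platform

@dataclass
class UserPrefs:
    verified_only_replies: bool = False
    verified_only_dms: bool = False

def allow_reply(sender: Account, prefs: UserPrefs) -> bool:
    # A reply reaches the user unless they have opted to hear only from verified accounts.
    if prefs.verified_only_replies and not sender.is_verified:
        return False
    return True

# A user who opts in simply stops seeing replies from unverified accounts.
prefs = UserPrefs(verified_only_replies=True)
print(allow_reply(Account("anon123", is_verified=False), prefs))  # False
print(allow_reply(Account("jane_doe", is_verified=True), prefs))  # True

The point of the sketch is that the limit applies at the point of display: the unverified account can still post, but its reach into opted-in users’ feeds and inboxes is curtailed.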

DCMS said the law itself won’t stipulate specific verification methods — rather the regulator (Ofcom) will offer “guidance”.

“When it comes to verifying identities, some platforms may choose to provide users with an option to verify their profile picture to ensure it is a true likeness. Or they could use two-factor authentication where a platform sends a prompt to a user’s mobile number for them to verify. Alternatively, verification could include people using a government-issued ID such as a passport to create or update an account,” the government suggests.
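As a rough, purely hypothetical illustration of those alternative routes (none of which is specified in the Bill itself, and Ofcom’s eventual guidance could look quite different), a platform might model the accepted methods something like this, treating any one completed method as sufficient:

from dataclasses import dataclass, field
from enum import Enum, auto

class VerificationMethod(Enum):
    PROFILE_PHOTO_LIKENESS = auto()  # profile photo checked as a "true likeness"
    SMS_TWO_FACTOR = auto()          # prompt sent to the user's mobile number
    GOVERNMENT_ID = auto()           # e.g. a passport checked when creating or updating an account

@dataclass
class Account:
    handle: str
    completed: set[VerificationMethod] = field(default_factory=set)

    @property
    def is_verified(self) -> bool:
        # In this sketch, completing any single accepted method is enough.
        return bool(self.completed)

alice = Account("alice")
alice.completed.add(VerificationMethod.SMS_TWO_FACTOR)
print(alice.is_verified)  # True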

Ofcom, the oversight body which will be in charge of enforcing the Online Safety Bill, will set out guidance on how companies can fulfil the new “user verification duty” and the “verification options companies could use”, it adds.

“In developing this guidance, Ofcom must ensure that the possible verification measures are accessible to vulnerable users and consult with the Information Commissioner, as well as vulnerable adult users and technical experts,” DCMS also notes, with a tiny nod to the massive topic of privacy.

Digital rights groups will at least breathe a sigh of relief that the UK isn’t pushing for a complete ban on anonymity, as some online safety campaigners have been urging.

When it comes to the tricky topic of online trolling, rather than going after abusive speech itself, the UK’s strategy hinges on putting potential limits on freedom of reach on mainstream platforms.

“Banning anonymity online entirely would negatively affect those who have positive online experiences or use it for their personal safety such as domestic abuse victims, activists living in authoritarian countries or young people exploring their sexuality,” DCMS writes, before going on to argue the new duty “will provide a better balance between empowering and protecting adults — particularly the vulnerable — while safeguarding freedom of expression online because it will not require any legal free speech to be removed”.

“While this will not prevent anonymous trolls posting abusive content in the first place — providing it is legal and does not contravene the platform’s terms and conditions — it will stop victims being exposed to it and give them more control over their online experience,” it also suggests.

Asked for his thoughts on the government’s balancing act here, Neil Brown, an internet, telecoms and tech lawyer at Decoded Legal, wasn’t convinced the approach is consistent with human rights law.

“I am sceptical that this proposal is consistent with the fundamental right ‘to receive and impart information and ideas without interference by public authority’, as enshrined in Article 10 Human Rights Act 1998,” he told TechCrunch. “Nowhere does it say that one’s right to impart information applies only if one has verified one’s identity to a government-mandated standard.

“While it would be lawful for a platform to choose to implement such an approach, compelling platforms to implement these measures seems to me to be of questionable legality.”

Under the government’s proposal, those who want to maximize their online visibility/reach would have to hand over an ID, or otherwise prove their identity to major platforms — and Brown also made the point that that could create a ‘two-tier system’ of online expression which might (say) serve the extrovert and/or obnoxious individual, while downgrading the visibility of those more cautious/risk-averse or otherwise vulnerable users who are justifiably wary of self-ID (and, probably, a lot less likely to be trolls anyway).

“Although the proposals stop short of requiring all users to hand over more personal details to social media sites, the outcome is that anyone who is unwilling, or unable, to verify themselves will become a second class user,” he suggested. “It appears that sites will be encouraged, or required, to let users block unverified people en masse.

“Those who are willing to spread bile or misinformation, or to harass, under their own names are unlikely to be affected, as the additional step of showing ID is unlikely to be a barrier to them.”

TechCrunch understands that the government’s proposal would mean that users of in-scope user-generated platforms who do not use their real name as their public-facing account identity (i.e. because they prefer to use a nickname or other moniker) would still be able to share (legal) views without limits on who would see their stuff — provided they had (privately) verified their identity with the platform in question.

Brown was a little more positive about this element of continuing to allow for pseudonymized public sharing.

But he also warned that plenty of people may still be too wary to trust their actual ID to platforms’ catch-all databases. (The outing of all sorts of viral anonymous bloggers over the years shows there’s no shortage of motivation for shielded identities to leak.)

“This is marginally better than a ‘real names’ policy — where your verified name is made public — but only marginally so, because you still need to hand over ‘real’ identity documents to a website,” said Brown, adding: “I suspect that people who remain pseudonymous for their own protection will be rightly wary of the creation of these new, massive, datasets, which are likely to be attractive to hackers and rogue employees alike.”

User controls for content filtering

In a second new duty being added to the Bill, DCMS said it will also require category one platforms to provide users with tools that give them greater control over what they’re exposed to on the service.

“The bill will already force in-scope companies to remove illegal content such as child sexual abuse imagery, the promotion of suicide, hate crimes and incitement to terrorism. But there is a growing list of toxic content and behaviour on social media which falls below the threshold of a criminal offence but which still causes significant harm,” the government writes.

“This includes racist abuse, the promotion of self-harm and eating disorders, and dangerous anti-vaccine disinformation. Much of this is already expressly forbidden in social networks’ terms and conditions but too often it is allowed to stay up and is actively promoted to people via algorithms.”

“Under a second new duty, ‘category one’ companies will have to make tools available for their adult users to choose whether they want to be exposed to any legal but harmful content where it is tolerated on a platform,” DCMS adds.

“These tools could include new settings and functions which prevent users receiving recommendations about certain topics or place sensitivity screens over that content.”

Its press release gives the example of “content on the discussion of self-harm recovery” as something which may be “tolerated on a category one service but which a particular user may not want to see”.
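Putting those pieces together, a user-controlled filter of this kind might look something like the following sketch. The topic labels, the idea of per-user blocked and screened lists, and the notion that posts arrive pre-tagged by some classifier are all assumptions for illustration, not anything DCMS or Ofcom has specified.

from dataclasses import dataclass, field

@dataclass
class FilterPrefs:
    blocked_topics: set[str] = field(default_factory=set)   # never show or recommend
    screened_topics: set[str] = field(default_factory=set)  # show behind a sensitivity screen

def present(post_topics: set[str], prefs: FilterPrefs) -> str:
    # Decide how a post tagged with `post_topics` is surfaced to this user.
    if post_topics & prefs.blocked_topics:
        return "hidden"
    if post_topics & prefs.screened_topics:
        return "sensitivity_screen"
    return "shown"

prefs = FilterPrefs(blocked_topics={"dangerous-disinformation"},
                    screened_topics={"self-harm-recovery"})
print(present({"self-harm-recovery"}, prefs))        # sensitivity_screen
print(present({"dangerous-disinformation"}, prefs))  # hidden

Everything difficult, of course, is hidden in how posts get those topic tags in the first place, which is precisely the practicality question raised below.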

Brown was more positive about this plan to require major platforms to offer a user-controlled content filter system — with the caveat that it would need to genuinely be user-controlled.

He also raised concerns about workability.

“I welcome the idea of the content filter system, so that people can have a degree of control over what they see when they access a social media site. However, this only works if users can choose what goes on their own personal blocking lists. And I am unsure how that would work in practice, as I doubt that automated content classification is sufficiently sophisticated,” he told us.

“When the government refers to ‘any legal but harmful content’, could I choose to block content with a particular political leaning, for example, that expounds an ideology which I consider harmful? Or is that anti-democratic (even though it is my choice to do so)?

“Could I demand to block all content which was in favour of COVID-19 vaccinations, if I consider that to be harmful? (I do not.)

“What about abusive or offensive comments from a politician? Or is it going to be a far more basic system, essentially letting users choose to block nudity, profanity, and whatever a platform determines to depict self-harm, or racism?”

“If it is to be left to platforms to define what the ‘certain topics’ are — or, worse, the government — it might be easier to achieve, technically. However, I wonder if providers will resort to overblocking, in an attempt to ensure that people do not see things which they have asked to be suppressed.”

An ongoing issue with assessing the Online Safety Bill is that huge swathes of specific details are simply not yet clear, given the government intends to push so much detail through via secondary legislation. And, again today, it noted that further details of the new duties will be set out in forthcoming Codes of Practice drawn up by Ofcom.

So, without far more practical specifics, it’s not really possible to properly understand the real-world impacts, such as how — literally — platforms may be able to or try to implement these mandates. What we’re left with is, mostly, government spin.

But spitballing off that spin, how might platforms generally approach a mandate to filter “legal but harmful content” topics?

One scenario — assuming the platforms themselves get to decide where to draw the ‘harm’ line — is, as Brown predicts, that they seize the opportunity to offer a massively vanilla ‘overblocked’ feed for those who opt in to exclude ‘harmful but legal’ content; in large part to shrink their legal risk and operational cost (NB: automation is super cheap and easy if you don’t have to worry about nuance or quality; just block anything you’re not 100% sure is 100% non-controversial!).
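To spell out why that cheap-and-cautious automation drifts into overblocking, here is a toy illustration (the scores, threshold and classifier are entirely invented): if the filter only passes content it is near-certain is benign, everything ambiguous, including plenty of legal and even helpful material, simply disappears.

def overblocking_filter(posts, benign_confidence=0.99):
    # Keep only posts the (hypothetical) classifier scores as clearly benign.
    return [text for text, p_benign in posts if p_benign >= benign_confidence]

feed = [
    ("cat picture", 0.999),
    ("self-harm recovery discussion", 0.60),  # legal, arguably helpful; blocked anyway
    ("heated political argument", 0.45),      # legal; blocked anyway
]
print(overblocking_filter(feed))  # ['cat picture']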

But they could also use overblocking as a manipulative tactic — with the ultimate goal of discouraging people from switching on such a massive level of censorship, and/or nudging them to return, voluntarily, to the non-filtered feed where the platform’s polarizing content algorithms have a fuller content spectrum to grab eyeballs and drive ad revenue… Step 3: Profit.

The kicker is platforms would have plausible deniability in this scenario — since they could simply argue the user themselves opted in to seeing harmful stuff! (Or at least didn’t opt out of it, having switched the filter off or never turned it on.) Aka: ‘Can’t blame the AIs, gov!’

Platforms would suddenly be off the hook for any data-driven, algorithmically amplified harms. Online harm would become the user’s fault for not turning on the available high-tech sensitivity screen to shield themselves. Responsibility diverted.

Which, frankly, sounds like the sort of regulatory oversight an adtech giant like Facebook could cheerfully get behind.

Still, platform giants face plenty of risk and burden from the full package of proposals coming at them from Dorries & co.

The secretary of state has also made no secret of how cheerful she’d be to lock up the likes of Mark Zuckerberg and Nick Clegg.

In addition to being required to proactively remove explicitly illegal content like terrorism and CSAM — under threat of massive fines and/or criminal liability for named execs — the Bill was recently expanded to mandate proactive takedowns of a much wider range of content, related to online drug and weapons dealing; people smuggling; revenge porn; fraud; promoting suicide; and inciting or controlling prostitution for gain.

So platforms will need to scan for and remove all that stuff, actively and up front, rather than acting after the fact on user reports as they’ve been used to (or not acting very much, as the case may be). Which really does upend their content business as usual.

DCMS also recently announced it would add new criminal communications offences to the bill too — saying it wanted to strengthen protections from “harmful online behaviours” such as coercive and controlling behaviour by domestic abusers; threats to rape, kill and inflict physical violence; and deliberately sharing dangerous disinformation about hoax COVID-19 treatments — further expanding the scope of content that platforms must be primed and on the lookout for.

So given the ever-expanding scope of the content scanning regime coming down the pipe for platforms — combined with tech giants’ unwillingness to properly resource human content moderation (since that would torch their profits) — it might actually be a whole lot easier for Zuck & co to switch to a single, super vanilla feed.

Make it cat pics and baby photos all the way down — and hope the eyeballs don’t roll away and the profits don’t drain away but Ofcom stays away… or something.




