Meta has a new scam ads problem down under


Australia’s Competition and Consumer Commission (ACCC) has instituted proceedings against Facebook owner Meta for allowing the spread of scam ads on its platforms and — it alleges — not taking sufficient steps to tackle the issue.

The watchdog said today that it’s seeking “declarations, injunctions, penalties, costs and other orders” against the social media giant, accusing it of engaging in “false, misleading or deceptive conduct” by publishing scam advertisements featuring prominent Australian public figures — activity the ACCC asserts breaches local consumer laws.

Specifically, it alleges Meta’s conduct is in breach of the Australian Consumer Law (ACL) or the Australian Securities and Investments Commission Act (ASIC Act).

The regulator’s accusation extends to alleging that Meta “aided and abetted or was knowingly concerned in false or misleading conduct and representations by the advertisers” (i.e. the scammers who used its platform to net victims).

Meta rejects the accusations, saying it already uses technology to try to detect and block scams.

In a statement on the ACCC’s action attributed to a company spokesperson, the tech giant said:

“We don’t want ads seeking to scam people out of money or mislead people on Facebook — they violate our policies and are not good for our community. We use technology to detect and block scam ads and work to get ahead of scammers’ attempts to evade our detection systems. We’ve cooperated with the ACCC’s investigation into this matter to date. We will review the recent filing by the ACCC and intend to defend the proceedings. We are unable to further comment on the detail of the case as it is before the Federal Court.”

The ACCC says the scam ads it’s taking action over promoted cryptocurrency investment or money-making schemes via Meta’s platforms, and featured people likely to be well known to Australians — such as businessman Dick Smith, TV presenter David Koch and former NSW Premier Mike Baird — who could be seen in the ads apparently endorsing the schemes. In reality, none of these public figures had approved or endorsed the messaging.

“The ads contained links which took Facebook users to a fake media article that included quotes attributed to the public figure featured in the ad endorsing a cryptocurrency or money-making scheme. Users were then invited to sign up and were subsequently contacted by scammers who used high pressure tactics, such as repeated phone calls, to convince users to deposit funds into the fake scheme,” it explains.

The ACCC also notes that celebrity endorsement cryptocurrency scam ads continued being displayed on Facebook in Australia after public figures elsewhere around the world had complained that their names and images had been used in similar ads without their consent.

A similar complaint was brought against Facebook in the UK back in 2018 — when local consumer advice personality Martin Lewis sued the platform for defamation over a flood of scam ads bearing his image and name without his permission, which he said were being used to trick and defraud UK consumers.

Lewis ended that litigation against Facebook in 2019 after it agreed to make some changes to its platform locally — including adding a button to report scam ads. (A Facebook misleading and scam ads reporting form was subsequently also made available by the company in Australia, the Netherlands, and New Zealand.)

Despite ending his suit, Lewis did not end his campaigning against scam ads — most recently (successfully) pressing for draft UK Online Safety legislation, which was introduced to the country’s parliament yesterday, to be extended to bring scam ads into scope. That incoming regime will include fines of up to 10% of global annual turnover to encourage tech giants to comply.

Australia, meanwhile, legislated on Online Safety last year — with its own similarly titled Act coming into force this January. However, its online safety legislation is narrower, focused on other types of abusive content (such as CSAM, terrorist content and cyberbullying).

To pursue online platforms over the scam ads issue, the country is relying on existing consumer and financial services rules.

It remains to be seen whether these laws are specific enough to force a change in Meta’s conduct around ads.

The adtech giant makes its money from profiling people to serve targeted advertising. Any limits on how its ad business can operate — such as requirements to manually review all ads before posting and/or limitations on its ability to target ads at eyeballs — would significantly ramp up its costs and threaten its ability to generate so much revenue.

So it’s notable that the ACCC does appear to be eyeing orders for such types of measures — suggesting, for example, that Meta’s targeting tools are exacerbating the scam ads issue by enabling scammers to target people who are “most likely to click on the link in an ad” — assuming, of course, that it prevails in its proceeding.

That looks like the most interesting element of the proceeding — if the ACCC ends up digging into how scammers are able to use Facebook’s ad tools to amplify the effectiveness of their scams.
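To make that concern concrete, here is a minimal, purely illustrative sketch of click-optimised ad delivery (all names and numbers below are invented; this is not a description of Meta’s actual system). The point it demonstrates is that a delivery engine ranking on predicted click probability is indifferent to whether the likely clickers are customers or victims:

```python
# Purely illustrative sketch of click-optimised ad delivery; all names and
# numbers are invented, and this is not a description of Meta's actual system.
from dataclasses import dataclass


@dataclass
class User:
    user_id: int
    predicted_ctr: float  # model's estimated probability this user clicks the ad


def deliver(impression_budget: int, candidates: list[User]) -> list[User]:
    """Serve the ad to the users scored as most likely to click on it."""
    ranked = sorted(candidates, key=lambda u: u.predicted_ctr, reverse=True)
    return ranked[:impression_budget]


# The same optimisation that finds likely buyers for a legitimate ad finds
# likely victims for a scam ad: the ranking is indifferent to why a user
# is predicted to click.
audience = [User(i, ctr) for i, ctr in enumerate([0.01, 0.12, 0.03, 0.25, 0.08])]
print([u.user_id for u in deliver(2, audience)])  # -> [3, 1]
```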

In Europe, wider moves are already afoot to put legal limits on platforms’ ability to run tracking ads, and Meta has been warning its investors of “regulatory headwinds” impacting its ad business.

“The essence of our case is that Meta is responsible for these ads that it publishes on its platform,” ACCC chair Rod Sims wrote in a statement. “It is a key part of Meta’s business to enable advertisers to target users who are most likely to click on the link in an ad to visit the ad’s landing page, using Facebook algorithms. Those visits to landing pages from ads generate substantial revenue for Facebook.

“We allege that the technology of Meta enabled these ads to be targeted to users most likely to engage with the ads, that Meta assured its users it would detect and prevent spam and promote safety on Facebook but it failed to prevent the publication of other similar celebrity endorsement cryptocurrency scam ads on its pages or warn users.”

“Meta should have been doing more to detect and then remove false or misleading ads on Facebook, to prevent consumers from falling victim to ruthless scammers,” he added.

Sims also pointed out that in addition to “untold losses to consumers” — in one case the ACCC said a consumer lost $650,000 to a scam advertised as an investment opportunity on Facebook — scam ads damage the reputation of public figures falsely associated with them. He reiterated that Meta has failed to take “sufficient steps” to stop fake ads featuring public figures, even after those figures had reported to it that their name and image were being used in celebrity endorsement cryptocurrency scam ads.

The idea that a technology platform which — over a full decade ago! — was able to deploy facial recognition on its platform for autotagging users in photo uploads would be unable to successfully apply the same sort of tech to automatically flag-for-review all ads bearing certain names and faces — after, or even before, a public figure reported a concern — looks highly questionable.
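For a sense of how tractable that flagging step looks with off-the-shelf tooling, here is a minimal sketch using the open-source face_recognition library. The reference photo, file names and matching threshold are illustrative assumptions, not Meta’s pipeline, and a production system would obviously need far more care around false positives at ad-platform scale:

```python
# Minimal sketch of flagging ads that feature a known public figure, using the
# open-source face_recognition library. The reference photo, file names and
# threshold are illustrative assumptions, not Meta's pipeline.
import face_recognition

# One reference encoding per public figure who has reported misuse of their image.
known_figures = {
    "Dick Smith": face_recognition.face_encodings(
        face_recognition.load_image_file("dick_smith_reference.jpg")
    )[0],
}


def flag_for_review(ad_image_path: str, tolerance: float = 0.6) -> list[str]:
    """Return names of known public figures whose faces appear in the ad image."""
    ad_image = face_recognition.load_image_file(ad_image_path)
    hits = []
    for encoding in face_recognition.face_encodings(ad_image):
        for name, reference in known_figures.items():
            # Lower distance means a closer match; 0.6 is the library's default.
            if face_recognition.face_distance([reference], encoding)[0] <= tolerance:
                hits.append(name)
    return hits


# Any non-empty result would route the ad to human review before it runs.
print(flag_for_review("candidate_ad_creative.jpg"))
```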

And while Meta claims that “cloaking” is one technique spammers use to try to work around its review processes — i.e., presenting different content to Facebook users and Facebook crawlers or tools — that is also exactly the kind of technology problem you’d imagine a tech giant would be able to deploy its vast engineering resources to crack.
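A naive version of a cloaking check is genuinely simple, as the rough sketch below shows: fetch the landing page twice, once as an ordinary browser and once as the platform’s link crawler, and flag divergence. (Real cloaking detection is much harder, since scammers also key off IP ranges and request timing rather than just user-agent strings; the URL and header values here are assumptions for illustration.)

```python
# Naive cloaking check: fetch an ad's landing page as an ordinary browser and
# as the platform's link crawler, and flag the ad if the responses diverge.
# The URL and user-agent strings are assumptions for illustration only.
import hashlib

import requests

BROWSER_UA = "Mozilla/5.0 (Windows NT 10.0; Win64; x64)"  # a typical user
CRAWLER_UA = "facebookexternalhit/1.1"  # Facebook's published link-crawler UA


def page_fingerprint(url: str, user_agent: str) -> str:
    """Hash the response body so two fetches can be compared cheaply."""
    body = requests.get(url, headers={"User-Agent": user_agent}, timeout=10).text
    return hashlib.sha256(body.encode()).hexdigest()


def looks_cloaked(url: str) -> bool:
    """True if the page serves different content to users and to the crawler."""
    return page_fingerprint(url, BROWSER_UA) != page_fingerprint(url, CRAWLER_UA)


if looks_cloaked("https://example.com/landing-page"):
    print("Divergent responses: route this ad to manual review")
```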

It’s certainly telling that, in the four or so years since Lewis’ scam ads litigation, the exact same playbook can apparently still be deployed successfully by scammers through Facebook’s platform all around the world. If this is success, one has to wonder what Meta failing would look like.

How many scam ads Meta is ‘successfully’ removing is not at all clear.

In the “spam” section of its self-styled Community Standards Enforcement Report (NB: spam, not scams; and “spam” here functions as a self-defined catch-all term, meaning it does not exclusively refer to problematic content that appears in ads specifically, let alone scams in ads), Meta writes that “1.2 billion” is the figure for “content actioned on spam” across the three months of Q4.

This figure is all but meaningless, since Meta gets to define what constitutes a single piece of “spam” for the purposes of its “transparency” reporting, as the company itself concedes in the report — hence the tortuous phrasing (“content actioned on spam”, not even pieces, let alone ads, photos or posts). It also, of course, gets to define what spam is in this context — apparently bundling scam ads into that far fuzzier category too.

Furthermore, Meta doesn’t even state in the report that 1.2BN refers to 1.2BN pieces of spam. (In any case, as noted above, a ‘piece’ of spam in Meta’s universe might actually refer to several pieces of content, such as multiple photos and text posts, which the company has decided to bundle up and count as one unit for public reporting purposes, as it also discloses in the report. That essentially means it can use a show of transparency to further obscure what’s actually happening on its platform.)

There’s more, too: the term “actioned” — yet another self-serving bit of Meta nomenclature — does not necessarily mean that the (in this case “spam”) content got removed, because the term bundles a number of other potential responses, such as screening content behind a warning or disabling accounts.
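A toy example with invented data shows how much interpretive wiggle room that nomenclature buys. The same three enforcement events can be reported as three units of “content actioned”, five pieces of content touched, or three pieces actually removed, depending entirely on the definitions chosen:

```python
# Toy example with invented data: the same enforcement activity produces very
# different totals depending on how items are bundled and what "actioned" means.
enforcement_log = [
    {"items": ["photo", "caption", "link"], "action": "removed"},  # one bundled takedown
    {"items": ["ad"], "action": "warning_screen"},                 # screened, not removed
    {"items": ["ad"], "action": "account_disabled"},               # account-level response
]

content_actioned = len(enforcement_log)                                # bundles: 3
pieces_touched = sum(len(e["items"]) for e in enforcement_log)         # items: 5
pieces_removed = sum(
    len(e["items"]) for e in enforcement_log if e["action"] == "removed"
)                                                                      # deleted: 3

print(content_actioned, pieces_touched, pieces_removed)  # -> 3 5 3
```

Without a mandated definition, any of those three numbers could be presented as the headline figure.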

So — tl;dr — as ever with big adtech, it’s impossible to trust platforms’ self-reported actions around the content they’re busy amplifying and monetizing — absent explicit legislative requirements mandating exactly which data points they must disclose to regulators in order to ensure actual oversight and genuine accountability.


