Swift retaliation: Fans strike back after explicit deepfakes flood X | TechCrunch
Nonconsensual deepfake porn of Taylor Swift went viral on X this week, with one post garnering more than 45 million views, 24,000 reposts and hundreds of thousands of likes before it was removed.
The pop star has one of the world’s most dedicated, extremely online, and incomprehensibly massive fanbases. Now, the Swifties are out for blood.
When mega-fandoms get organized, they’re capable of immense things, like when K-pop fans reserved hundreds of tickets to a Donald Trump rally in an attempt to tank attendance numbers. As the 2024 U.S. presidential election approaches, some pundits have even theorized about the power of Swifties as a voting bloc.
But today isn’t election day, and Swifties are focused on something more immediate: making the nonconsensual deepfakes of the musician as difficult to find as possible. Now, when you search terms like “taylor swift ai” or “taylor swift deepfake” on X, you’ll find thousands of posts from fans trying to bury the AI-generated content. On X, the phrase “PROTECT TAYLOR SWIFT” has been trending with over 36,000 posts.
Sometimes, these fandom-driven campaigns can cross a line. While some fans are encouraging each other to dox the X users who circulated the deepfakes, others worry about fighting harassment with more harassment — especially since the suspected perpetrator has a relatively common name, meaning the Swifties could, in some cases, be going after the wrong person. With so many thousands of fans taking part in the cause, it’s inevitable that not every Swiftie will be part of the same unified front — and some are more in touch with the “Reputation” era than others.
With the rise of accessible generative AI tools, this harassment tactic has become so widespread that last year, the FBI and international law enforcers issued a joint statement about the threat of sextortion. According to research from cybersecurity firm Deeptrace, about 96% of deepfakes are pornographic, and they almost always feature women.
“Deepfake pornography is a phenomenon that exclusively targets and harms women,” the report reads. This abuse has even seeped into schools, where underage girls have been targeted by their classmates with explicit, nonconsensual deepfakes. So, for some Taylor Swift fans, this isn’t just a matter of protecting the star. They realize that these attacks can happen to any of them, not just celebrities, and that they have to fight to set the precedent that this behavior is intolerable.
“She is taking the hit for us right now, y’all,” said a TikTok user named LeAnn in a video urging users to defend Swift. “In protecting her, you’re going to be protecting yourself, and your daughters.”
According to 404 Media, the images originated on a Telegram chat that’s dedicated to creating nonconsensual, explicit images of women using generative AI. The group directs its users to generate AI deepfakes on Microsoft’s Designer; though this kind of content violates Microsoft policy, its AI is still capable of creating it, and users have created simple workarounds to bypass basic safety tools.
Microsoft and X did not respond to requests for comment before publication.
Lawmakers are making some headway toward criminalizing nonconsensual deepfakes. At the state level, Virginia has banned deepfake revenge porn, and at the federal level, Representative Yvette Clarke (D-NY) recently reintroduced the DEEPFAKES Accountability Act, which she first proposed in 2019. While critics worry about the difficulty of legislating the dark corners of the web, some say the bill could at least establish a legal precedent of protection from this abuse.

This isn’t the first time Swift’s fans have drawn attention to a systemic failure. They previously called out Ticketmaster, which is owned by entertainment mega-company Live Nation, over the disastrous experience of buying tickets for Swift’s Eras Tour. In a particularly memorable statement, FTC Chair Lina Khan said last year that the debacle “ended up converting more gen Z-ers into anti-monopolists overnight than anything I could have done.”
This abuse campaign is emblematic of the problems with AI’s steep ascent: companies are building too fast to properly assess the risks of the products they’re shipping. So, maybe Taylor Swift fans will take up the fight for thoughtful regulation of fast-developing AI products — but if it takes a mass harassment campaign against a celebrity for undertested AI models to face any sort of scrutiny, then that’s a whole other problem.