Elon Musk's X has blocked searches for Taylor Swift after sexually explicit images of the pop star created using artificial intelligence were widely shared on the platform.
The incident is the latest example of how social media groups are cracking down on so-called deepfakes: realistic images and audio generated using AI that can be misused to portray celebrities in compromising or misleading situations without their consent.
Searches for terms such as “Taylor Swift” or “Taylor AI” on X now return no results. The change means even legitimate content about one of the world's most popular stars will be harder to find on the site.
“This is a temporary measure and is being undertaken with the utmost caution as we prioritize safety in this matter,” said Joe Benarroch, head of business operations at X.
Swift has not publicly commented on the matter.
X was bought for $44 billion in October 2022 by billionaire entrepreneur Musk, who has cut resources for content monitoring and relaxed his moderation policies, citing his free speech ideals.
This weekend's deployment of the blunt moderation mechanism comes as X and its competitors Meta, TikTok and Google's YouTube face increasing pressure to crack down on the misuse of increasingly realistic and accessible deepfake technology. A thriving market has emerged for tools that allow anyone to use generative AI to create a video or image in the likeness of a celebrity or politician in just a few clicks.
Although deepfake technology has been available for several years, recent advances in generative AI have made creating the images easier and more realistic. Experts warn that fake pornographic images are one of the most common emerging abuses of deepfake technology, also pointing to its increasing use in political disinformation campaigns during an election year around the world.
In response to a question about the Swift images on Friday, White House press secretary Karine Jean-Pierre said the spread of the false images was “alarming,” adding that while social media companies make their own independent decisions about content management, “they play an important role in enforcing their own rules.” She called on Congress to legislate on the issue.
On Wednesday, social media executives including Linda Yaccarino of X were set to face questioning from US senators over child safety online.
On Friday, X's official safety account said that posting “images of non-consensual nudity (NCN)” is “strictly prohibited” on the platform, which has a “zero-tolerance policy towards such content.”
It added: “Our teams are actively removing all identified images and taking appropriate action against the accounts responsible for posting them. We are closely monitoring the situation to ensure that any further violations are immediately addressed and the content removed.”
However, X's content moderation resources have been depleted since Musk's takeover, raising questions about its ability to curb the spread of such images.
A report from technology news site 404 Media found that the images appeared to have originated on the anonymous message board 4chan and in a group on the messaging app Telegram dedicated to sharing abusive, AI-generated images of women, often created using a Microsoft tool.
Microsoft said it was still investigating the images but had “strengthened our existing security systems to prevent our services from being used to generate such images.”
Telegram did not immediately respond to requests for comment.