Sexually explicit AI-generated images of Taylor Swift have been spreading on X (formerly Twitter), adding to the growing problem of AI-generated fake pornography and the difficulty of preventing its distribution.
The most viral post drew more than 45 million views, 24,000 reposts, and large numbers of likes and bookmarks. The verified user who shared the images was eventually suspended for violating platform policy, but the post remained visible on the site for roughly 17 hours before it was removed.
The images nevertheless continued to circulate, reposted by other accounts as users discussed the viral post. A wave of new graphic fakes has appeared since then, and many of the older ones are still up. “Taylor Swift AI” became a trending topic in several countries, pushing the images to an even wider audience.
According to reporting by 404 Media, the images may have originated in a Telegram group where users share sexually explicit AI-generated pictures of women, often created with Microsoft Designer. Group members reportedly joked about Swift’s images going viral on X.
X’s policies explicitly prohibit both synthetic and manipulated media and nonconsensual nudity. About 24 hours after the images began spreading, X released the following public statement, which did not directly mention the Swift photographs:
Posting Non-Consensual Nudity (NCN) images is strictly prohibited on X and we have a zero-tolerance policy towards such content. Our teams are actively removing all identified images and taking appropriate actions against the accounts responsible for posting them. We're closely…
— Safety (@Safety) January 26, 2024
X has been criticized for letting many of the posts stay up for so long. In response, Swift’s fans have flooded the hashtags used to share the images with posts praising her performances, in an effort to bury the explicit fakes.
The episode underscores how difficult it is to combat deepfake pornography and AI-generated images of real people. Some AI image generators include safeguards against producing nude, pornographic, or photorealistic images of celebrities, but many do not. Even under ideal conditions, platforms like X, which has sharply cut back its moderation capacity, find it exceedingly difficult to stop such images from spreading.
Meanwhile, following the spread of false information about the Israel-Hamas conflict, the European Union has opened an investigation into the company on suspicion that the platform is being used to “disseminate illegal content and disinformation.” The company is also reportedly being questioned about its crisis response protocols.