Taylor Swift Gets Boost in Battle Against Explicit AI Pictures
Taylor Swift has received a boost in the wake of sexually explicit AI images portraying her being circulated on social media, with a powerhouse attorney offering to represent her should she take legal action.

AI images are pictures generated through artificial intelligence software using a text prompt. This can be done without a person’s consent. Among the AI images of Swift being shared are some that depict her posing inappropriately while at a Kansas City Chiefs game. The pop star has attended several of the NFL team’s games this season amid her romance with Chiefs tight end Travis Kelce.

The offensive AI images originated on the AI celebrity porn website Celeb Jihad on January 15. They were subsequently shared on X, formerly Twitter, this week, clocking up millions of views before the associated accounts were suspended.

On Monday, lewd AI images portraying Swift were posted by X account @FloridaPigMan. They have since been removed for violating the social media platform’s rules. Another sexually explicit fake photo of Swift was posted on the website Rule 34 on November 21, 2023. It now appears to have been removed.

Taylor Swift attends the 65th GRAMMY Awards on February 05, 2023 in Los Angeles, California. The star has received a wave of support after AI images depicting her in a sexually explicit manner were shared…

The legal system has not caught up with the emerging threat of such images flooding publicly accessible spaces—but that could eventually change. On Tuesday, two lawmakers reintroduced a bill that would make the non-consensual sharing of digitally altered pornographic images a federal crime.

Representative Joseph Morelle, a Democrat from New York, first authored the “Preventing Deepfakes of Intimate Images Act” in May 2023, which he said at the time was created “to protect the right to privacy online amid a rise of artificial intelligence and digitally-manipulated content.” He has since added Representative Tom Kean, a Republican from New Jersey, as a co-sponsor.


Swift’s fans, who dub themselves Swifties, took action on their own before the images were deleted from X. In an effort to drown out the images and make them harder to find, many fans flooded related keywords with posts bearing the phrase “Protect Taylor Swift.”

Amid the ongoing furor over the images, New York City-based victims’ rights attorney Carrie Goldberg has offered her services to Swift.

Responding to a post on X stating that Swift is considering legal action, Goldberg wrote: “My resumé:
— have sued malicious platforms out of existence (RIP Omegle)
— overcame Section 230 4x
— participated in WH [White House] executive order on AI that included deepfakes
— Was ‘expert’ for launch of WH Task Force against Online Harms.”

Showing that she’s also a fan of Swift’s chart-topping work, Goldberg concluded that she has “run 5 miles w/ Anti-hero on repeat.”

Goldberg is also the author of the book Nobody’s Victim: Fighting Psychos, Stalkers, Pervs, and Trolls.

Newsweek has contacted representatives of Swift and Goldberg via email for comment.

Senator Mark Warner, a Democrat from Virginia and chairman of the Senate Intelligence Committee, also weighed in on the matter on Tuesday as he shared Newsweek’s article on the outrage the images had sparked.

“I’ve repeatedly warned that AI could be used to generate non-consensual intimate imagery,” he wrote on X. “This is a deplorable situation, and I’m going to keep pushing on AI companies to stop this horrible capability and on platforms to stop their circulation.”


In another post hours later regarding Swift’s reported legal plans, Warner noted that “current law may insulate platforms and websites from exactly this sort of accountability. I want to pass Section 230 reform so we can hold tech firms accountable for allowing this disgusting content to proliferate.”

Passed in 1996, Section 230 of the Communications Decency Act offers a degree of immunity to websites for any content uploaded by third parties—be it in the form of a social media post, classified ad or user review.

It gives sites acting in “good faith” the protection to remove any objectionable material, regardless of whether it is “constitutionally protected.” The law is broad, saying that the rules apply to any “provider or user of an interactive computer service.”


Section 230 does not offer blanket protection, however; exemptions mean lawsuits remain possible in criminal and intellectual property cases.

Discussing moves to make images such as the ones depicting Swift illegal, Morelle this week told Newsweek that the AI creations were part of a wider trend.

“Intimate deepfake images like those targeting Taylor Swift are disturbing, and sadly, they’re becoming more and more pervasive across the internet. I’m appalled this type of sexual exploitation isn’t a federal crime—which is why I introduced legislation to finally make it one,” Morelle said.

“The images may be fake, but their impacts are very real. Deepfakes are happening every day to women everywhere in our increasingly digital world, and it’s time to put a stop to them. My legislation would be a critical step forward in regulating AI and ensuring there are consequences for those who create deepfake images.”