Microsoft began cracking down on the use of its AI image generator after sexually explicit deepfake images of Taylor Swift, traced back to the company’s tool, began circulating on social media, raising the possibility of a lawsuit from the singer.
The fake photos depicted a nude Swift surrounded by Kansas City Chiefs players, a reference to her highly publicized relationship with NFL player Travis Kelce.
Microsoft’s AI program was soon identified as the source of the pornographic deepfakes, which were shared on X, Reddit, and other websites, as the site 404 Media reported on Monday.
X has since blocked searches for Swift from yielding results, a “temporary action,” according to company executive Joe Benarroch.
The tech giant has since pushed an update to the tool, Designer, a text-to-image program powered by OpenAI’s DALL-E 3, adding “guardrails” to prevent it from being used to generate nonconsensual fake images.
“We are investigating these reports and are taking appropriate action to address them,” a Microsoft spokesperson told 404 Media, which first reported on the Designer update. “We have large teams working on the development of guardrails and other safety systems in line with our responsible AI principles, including content filtering, operational monitoring and abuse detection to mitigate misuse of the system and help create a safer environment for users.”
The spokesperson noted that any Designer users who create deepfakes will lose access to the service, per the company’s Code of Conduct.
Swift has not yet publicly commented on the explicit deepfake images.
The episode is the latest in a string of misconduct and infringement controversies over Microsoft’s AI programs that could lead to another lawsuit for the company.
In May 2023, Democratic Rep. Joseph Morelle introduced the Preventing Deepfakes of Intimate Images Act, which would make it illegal to share deepfake pornographic photos or content without consent and would allow victims to sue the creators of such material while maintaining their anonymity. The bill was referred to the House Judiciary Committee, but no further action has been taken in the eight months since it was introduced.
According to the 2023 State of Deepfakes report, more than 95,000 deepfake videos were posted online last year, an increase of 550% since 2019. The report also found that 98% of deepfake videos online are pornography and that 99% of the individuals targeted in those videos are women.
Microsoft’s update follows comments from CEO Satya Nadella, who said tech companies need to “move fast” to crack down on the misuse of artificial intelligence tools. Nadella described the spread of the fake pornographic images of Swift as “alarming and terrible.”
“We have to act. And quite frankly, all of us in the tech platform, irrespective of what your standing on any particular issue is,” he said, according to a transcript of an interview on NBC Nightly News set to air on Tuesday. “I don’t think anyone would want an online world that is completely not safe for both content creators and content consumers.”
The Swift deepfakes were removed from X after 17 hours, having reportedly been viewed more than 45 million times.