Legislators Propose No AI FRAUD Act in Response to Taylor Swift Deepfake Scandal


Last week, the internet was flooded with nonconsensual, AI-generated deepfake pornography featuring Taylor Swift, sparking outrage from the singer’s fanbase, concern from the White House, and a renewed push for legislative action. The incident has prompted lawmakers, led by a bipartisan group including Rep. María Elvira Salazar, to introduce the No Artificial Intelligence Fake Replicas And Unauthorized Duplications (No AI FRAUD) Act.

The explicit images rapidly spread across social media, prompting a rapid response from X, Elon Musk's social media platform. Despite efforts to suspend accounts and block searches for Swift's name, the images continued to circulate elsewhere. The incident has reignited the debate over whether U.S. citizens should be federally protected against AI abuse.

White House Backs No AI FRAUD Act Against Deepfake Threats


White House Press Secretary Karine Jean-Pierre expressed alarm at the circulation of these false images, emphasizing the need for action. The No AI FRAUD Act aims to establish a federal framework protecting individuals’ rights to their likeness and voice against AI-generated forgeries. The bill proposes federal jurisdiction to empower individuals to enforce this right and balance it against First Amendment protections.

If passed, the legislation would reaffirm the protection of every individual's likeness and voice, granting people control over the use of their identifying characteristics. It would also give individuals a cause of action against those who facilitate, create, or spread AI forgeries without consent, while aiming to balance those protections against speech and innovation safeguarded by the First Amendment.

Lawmakers hope that the high-profile case involving Taylor Swift will garner support for the No AI FRAUD Act. The legislation addresses the specific issue of AI-generated deepfake pornography, which is considered a form of image-based sexual abuse. Rep. Madeleine Dean emphasized the urgency of creating protections against harmful AI in the rapidly evolving landscape of artificial intelligence.

While 17 states have enacted 29 bills regulating artificial intelligence since 2019, the varying language and scope of these laws have left inconsistencies and gaps, particularly around pornographic deepfakes. The proposed federal law seeks to close those loopholes with a comprehensive, nationwide standard.


Taylor Swift's case illustrates how the legislative landscape varies at the state level. Some states lack explicit laws addressing deepfake pornography, while others, like New York, offer victims both criminal and civil remedies. The No AI FRAUD Act aims to provide a unified and robust federal response to AI abuse, with the hope of preventing further incidents like the one involving Taylor Swift.
