Legislators in the United States are pushing to criminalize the creation of deepfake images after explicit fake photographs of Taylor Swift spread widely across social media platforms, including X and Telegram.
U.S. Representative Joe Morelle condemned the spread of the images in a post on X, calling them “appalling.”
He called for immediate action and pointed to the Preventing Deepfakes of Intimate Images Act, a bill he authored that would criminalize non-consensual deepfakes at the federal level.
Deepfakes use artificial intelligence (AI) to produce manipulated videos and images that alter a person’s face or body. No federal law currently governs the creation or distribution of deepfake images, but legislators are moving to address the gap.
Rep. Yvette D. Clarke, a Democrat, said on X that the Taylor Swift situation is nothing new: women have been targeted by this technology for years, and advances in AI have made deepfakes easier and cheaper to create.
X stated that it is proactively removing the images and taking action against the accounts responsible for spreading them. The platform said it is closely monitoring the situation, ensuring the content is removed and promptly addressing further violations.
In 2023, the United Kingdom’s Online Safety Act made the sharing of deepfake pornography illegal. According to last year’s State of Deepfakes report, roughly 99 percent of deepfake content is pornographic, and the targets are overwhelmingly women.
The World Economic Forum’s 19th Global Risks Report highlighted the adverse effects of AI technologies, including generative AI. These effects, whether intentional or unintentional, touch economies, businesses, individuals, and ecosystems, and have fueled growing concern over AI-generated content.
The Canadian Security Intelligence Service (CSIS), the country’s principal national intelligence agency, has also raised concerns about online disinformation campaigns that use AI-generated deepfakes.
On June 12, the United Nations issued a report identifying AI-generated media as a serious and urgent threat to the integrity of information, particularly on social media.
According to the United Nations, the risk of online disinformation has risen in step with rapid technological advances, particularly in generative artificial intelligence, with deepfakes a specific concern.