To safeguard consumers, the United States Federal Trade Commission (FTC) is moving to amend a regulation that prohibits the impersonation of businesses and government agencies, citing the growing threat of deepfakes created with artificial intelligence (AI).
Once the FTC weighs public comments and settles on final language, the revised regulation may prohibit generative artificial intelligence (GenAI) platforms from offering impersonation-related products or services that they know could harm consumers.
As FTC Chair Lina Khan stated in a press release:
“With voice cloning and other AI-driven scams on the rise, protecting Americans from impersonator fraud is more critical than ever. Our proposed expansions to the final impersonation rule would do just that, strengthening the FTC’s toolkit to address AI-enabled scams impersonating individuals.”
The FTC's revised rule on government and business impersonation gives the agency the authority to file cases directly in federal court to compel con artists to return funds obtained by impersonating government or business entities.
The final rule on impersonation of government and business entities will take effect thirty days after its publication in the Federal Register. The supplemental notice of proposed rulemaking, which includes instructions on how to submit comments, will be open for public comment for sixty days after its publication in the Federal Register.
Deepfakes use artificial intelligence to generate manipulated videos by altering an individual's face or body. Legislators are attempting to address the issue in the absence of federal legislation governing the creation or dissemination of deepfake images.
In theory, individuals and celebrities targeted by deepfakes can pursue legal recourse through established channels, including copyright law, rights associated with their likeness, and various torts (such as intentional infliction of emotional distress or invasion of privacy). In practice, however, cases built on this patchwork of laws can be time-consuming and laborious to pursue.
The Federal Communications Commission banned robocalls that use AI-generated voices on January 31 by reinterpreting an existing rule that forbids nuisance messages made with pre-recorded or artificial voices.
This action followed a phone campaign in New Hampshire that attempted to dissuade individuals from voting by using a deepfake of President Joe Biden. States nationwide have enacted legislation criminalizing deepfakes in the absence of congressional intervention.