Texas firm Life Corporation and Walter Monk will face a lawsuit for allegedly sending an AI-generated robocall that mimicked the voice of United States President Joe Biden and advised New Hampshire residents not to vote in the January 23 primary election.
Attorney General John Formella stated in an announcement from the New Hampshire Department of Justice that the Election Law Unit of the Attorney General's Office identified the source as Life Corporation, a Texas-based firm, and an individual named Walter Monk.
The automated messages were generated with an artificial intelligence (AI) deepfake tool in an apparent attempt to interfere with the 2024 presidential election. The state attorney general's office labeled the robocalls false information and advised New Hampshire voters to disregard them.
AI deepfake tools are software or applications that use sophisticated AI algorithms to generate highly realistic but deceptive digital content, including images, audio recordings, and videos.
In collaboration with state and federal partners, including the Federal Communications Commission Enforcement Bureau and the Anti-Robocall Multistate Litigation Task Force, the state attorney general of New Hampshire identified voter suppression calls in mid-January and launched an investigation.
The Election Law Unit issued a cease-and-desist order to Life Corporation for violating Title LXIII of the 2022 New Hampshire Revised Statutes, which prohibits bribery, intimidation, and voter suppression.
The order requires immediate compliance, and the unit reserves the right to take further enforcement action based on prior conduct.
After examining the calls, Election Law Unit investigators traced them to Lingo Telecom, a telecommunications provider based in Texas. Concurrently, the U.S. Federal Communications Commission issued Lingo Telecom a cease-and-desist letter for allegedly transmitting robocalls that used AI-generated voice cloning. The letter demands that the company immediately stop supporting illegal robocall traffic.
On January 31, FCC Chairwoman Jessica Rosenworcel proposed classifying communications containing AI-generated voices as unlawful, thereby subjecting them to the regulations and penalties specified in the Telephone Consumer Protection Act.
Deepfakes have exacerbated concerns regarding AI-generated content; in its 19th Global Risks Report, the World Economic Forum outlined the negative consequences of AI technologies.
The Canadian Security Intelligence Service, Canada's principal national intelligence agency, has expressed concern about the spread of false information on the internet through AI deepfakes.