The United States Federal Communications Commission (FCC) has imposed a $1 million fine on Texas-based Lingo Telecom for its role in the illegal Biden deepfake robocall scheme.
The scheme involved a robocall that used an artificial intelligence-generated recording of President Joe Biden’s voice to discourage individuals from voting in New Hampshire’s primary election in January.
Crackdown by the FCC
According to a press release from the FCC, the $1 million fine is not only a punitive measure but also a step toward holding telecommunications companies accountable for the content they permit to travel over their networks.
In addition to the monetary penalty, the FCC has required Lingo Telecom to implement a “historic compliance plan.” The plan mandates strict adherence to the FCC’s caller ID authentication rules, which are designed to prevent the kind of fraud and deception seen in this case.
Further, Lingo Telecom must follow “Know Your Customer” and “Know Your Upstream Provider” principles, which require phone carriers to monitor call traffic effectively and ensure that all calls are properly authenticated.
Deepfakes threaten democratic processes
Steve Kramer, a political consultant, orchestrated the robocalls as part of a broader effort to disrupt the New Hampshire primary election. By using AI technology to create a convincing imitation of Biden’s voice, the calls sought to manipulate and intimidate voters, thereby subverting the democratic process.
On May 23, Kramer was indicted for his involvement in the robocalls, charged with impersonating a candidate during New Hampshire’s Democratic Party primary while working for a rival candidate, Dean Phillips.
The utilization of deepfake technology in this fraud is particularly alarming, as it represents a novel and unsettling development in the ongoing battle against disinformation.
Deepfakes, which employ artificial intelligence to produce audio or video recordings that are both highly realistic and fraudulent, pose a significant threat to the integrity of democratic processes.
In March, Cointelegraph highlighted the growing problem of AI-generated deepfakes in the current election cycle, emphasizing the urgent need for voters to distinguish fact from fiction.
In February, a consortium of 20 prominent AI technology companies pledged to prevent their software from being used to influence electoral outcomes.