Security firms caution that the attack method may expand beyond video and audio as artificial intelligence (AI) powered deepfake schemes become more prevalent.
Malicious actors intensified their operations in the second quarter of 2024, employing AI-powered deepfake schemes to defraud crypto holders, according to a Sept. 4 report from software firm Gen Digital.
The company stated that the fraudster group “CryptoCore” had already stolen more than $5 million in cryptocurrency using AI deepfakes.
Although the sum appears modest compared to other attack methods in the crypto space, security professionals believe that AI deepfake attacks can expand further, thereby endangering the security of digital assets.
AI deepfakes pose a threat to wallet security
CertiK, a security firm specializing in web3 technology, anticipates deepfake schemes powered by AI will become increasingly sophisticated.
A spokesperson for CertiK told Cointelegraph that this attack vector has the potential to expand beyond audio and video recordings in the future.
The spokesperson explained that the attack vector could be used to trick wallets that rely on facial recognition, granting hackers access:
“For instance, if a wallet relies on facial recognition to secure critical information, it must evaluate the robustness of its solution against AI-driven threats.”
According to the spokesperson, crypto community members must develop a greater understanding of the mechanics of this attack.
AI deepfakes will persist in the crypto industry
Luis Corrons, a security evangelist for Norton, anticipates that AI-powered assaults will persist in their pursuit of cryptocurrency holders.
Corrons observed that cryptocurrency offers attackers substantial financial rewards with a lower likelihood of being caught. He stated:
“Cryptocurrency transactions are frequently anonymous and highly valued, making them a more appealing target for cybercriminals. Successful attacks result in more substantial financial rewards and a reduced risk of detection.”
Additionally, Corrons said the lack of regulation in the crypto sector means cybercriminals face fewer legal repercussions and have more opportunities to launch attacks.
How to detect AI-powered deepfake attacks
Although AI-powered attacks pose a significant threat to crypto users, security specialists believe that there are methods by which users can safeguard themselves from this type of threat. Education is an appropriate starting point, according to a spokesperson for CertiK.
According to a CertiK engineer, knowing the hazards, as well as the tools and services available to combat them, is crucial. They also emphasized the importance of being wary of unsolicited requests, stating:
“Being skeptical of unsolicited requests for money or personal information is crucial, and enabling multifactor authentication for sensitive accounts can help add an extra layer of protection against such scams.”
Meanwhile, Corrons believes there are “red flags” that users can look for to avoid AI deepfake scams. These include unnatural body movements, facial expressions, and eye movements.
A lack of emotion can also be a telling sign. “If an individual’s facial expressions do not appear to correspond with their intended message, it is possible to identify facial morphing or image stitching,” Corrons explained.
Beyond these factors, the executive said users can spot AI deepfakes by watching for audio inconsistencies, audiovisual misalignment, and unnatural body shapes.