David Schwartz “started to worry” about the future of AI when he thought about the possibility of deepfakes and how they could affect social, economic, and political systems.
AI will make it possible to broadcast terrorist attacks that never happened, according to Ripple's CTO.
The chief technology officer, one of the original architects of the XRP Ledger (XRPL), took to Twitter to share his concerns about artificial intelligence (AI) technology.
He said he had never previously considered AI development dangerous, but he recently began to worry about deepfake streams, one of the most striking applications of the technology.
In 20 years, he says, $1 million will be enough to launch "dozens" of video streams of events that never happened. Each stream will be "interactive," and together they will provide seemingly convincing evidence of fake public events.
Deepfake videos of terrorist attacks, for example, could be watched by people all over the world. Eventually, such technology could rewrite what are treated as the undisputed facts of politics, meaning AI could make the information landscape far worse than it is today.
Deepfakes are already common in crypto.
Mr. Schwartz is also concerned about the role CBDCs will play in the growth of cryptocurrency and of modern economies.
As U.Today has previously reported, Ripple is involved in numerous CBDC projects around the world.
Millions of dollars have already been lost to deepfake operations in Web3. In 2020, scammers used an AI-generated avatar of Justin Sun to impersonate the Tron (TRX) founder and steal money from investors.
Most crypto scams on TikTok, YouTube, and Instagram use deepfakes of Elon Musk, Changpeng “CZ” Zhao, and Vitalik Buterin.