According to a senior politician in the UK's opposition Labour Party, artificial intelligence (AI) developers should be licensed and regulated in the same way as the pharmaceutical, medical, and nuclear industries.
Lucy Powell, the Labour Party's digital spokesperson, told The Guardian on June 5 that companies such as OpenAI and Google that have created AI models should “need a license to build these models,” adding:
“My real point of concern is the lack of any regulation of the large language models that can then be applied across a range of AI tools, whether that’s governing how they are built, how they are managed or how they are controlled.”
Powell argued that regulating the development of specific technologies is preferable to prohibiting them outright, as the European Union did when it banned facial recognition tools.
She added that AI “can have many unintended consequences,” but the government could mitigate some risks if developers were compelled to disclose their AI training models and datasets.
“This technology is evolving so rapidly that a proactive, interventionist government approach is required as opposed to a laissez-faire approach,” she said.
Powell also believes that such advanced technology could significantly influence the British economy, and the Labour Party is reportedly finalizing its policies on artificial intelligence and related technologies.
Next week, Labour leader Keir Starmer intends to convene the party’s shadow cabinet at Google’s U.K. facilities to speak with AI-focused executives.
Matt Clifford, chief of the Advanced Research and Invention Agency, the government’s research agency established in February, warned on TalkTV on June 5 that AI could threaten humans within two years.
“If we don’t start thinking about how to regulate and think about safety now,” he said, then in two years “we will have very powerful systems.” Clifford clarified, however, that a two-year timeline is “extremely optimistic.”
Clifford emphasized that current AI tools could be utilized to “launch large-scale cyber attacks.” OpenAI has contributed $1 million to support AI-assisted cybersecurity technology to combat such activities.
“I believe there are numerous potential scenarios to be concerned about,” he said. “I believe it should occupy a prominent position on the agendas of policymakers.”