The US, EU, and UK sign the world’s first legally binding international AI treaty, prioritizing human rights and accountability in AI regulation.
The global community has taken a step toward aligning its goals and values on artificial intelligence (AI) with the signing of the Council of Europe’s AI convention.
The United States, the European Union, and the United Kingdom are expected to sign the convention on September 5.
A Milestone in AI Regulation
The convention will be the world’s first legally binding international treaty on AI. It holds signatories accountable for any harm or discrimination resulting from AI systems.
It mandates that the outputs of these systems uphold citizens’ rights to equality and privacy, and provides legal recourse for victims of AI-related rights violations.
However, the treaty does not yet establish enforcement mechanisms such as fines; for now, compliance is monitored rather than actively enforced.
The UK’s Minister for Science, Innovation, and Technology, Peter Kyle, described the signing as an “important” global milestone, stating, “The fact that we hope such a diverse group of nations is going to sign up to this treaty shows that actually, we are rising as a global community to the challenges posed by AI.”
Drafted over the past two years with input from more than 50 countries, including Canada, Israel, Japan, and Australia, this treaty represents a pioneering step in international AI regulation.
Despite this progress, individual nations have been pursuing their own localized AI regulations.
Regional and National AI Regulations
In the summer, the EU enacted the AI Act, which took effect on August 1. The legislation introduces extensive regulations for AI, particularly targeting high-risk systems and general-purpose models trained with substantial computing power.
While the Act is intended to ensure safety, AI developers have criticized it for potentially hindering innovation.
For instance, Meta, the company behind one of the largest language models, Llama 2, has halted the rollout of its latest products in the EU due to these new regulations.
Additionally, tech companies united in August to request more time to comply with the EU regulations.
A national AI regulatory framework in the United States has not yet been established. However, the Biden Administration has set up committees and task forces focused on AI safety.
Meanwhile, legislators in California are actively working on AI regulations.
Recently, two bills have advanced through the State Assembly and are awaiting a decision from Governor Gavin Newsom.
One bill addresses the unauthorized creation of AI-generated digital replicas of deceased individuals; the other, which major AI developers have opposed, mandates safety testing for advanced AI models and requires that such models include a “kill switch.”
California’s AI regulations are significant given that the state is home to leading tech developers such as OpenAI, Meta, and Alphabet (Google).