Reuters reports that the Group of Seven (G7) industrial nations will agree on October 30 to an artificial intelligence (AI) code of conduct for developers, intended to help mitigate the technology’s risks.
According to the report, the code consists of eleven points that seek to promote “safe, secure, and trustworthy AI worldwide” and help “capture” the benefits of AI while addressing and mitigating its risks.
G7 leaders drafted the proposal in September. It provides voluntary action guidance to “organizations developing the most advanced AI systems, including the most advanced foundation models and generative AI systems.”
In addition, it suggests that firms publish reports on the capabilities, limitations, use, and misuse of the systems they are developing, and that they implement robust security controls for those systems.
The G7 comprises Canada, France, Germany, Italy, Japan, the United Kingdom, and the United States; the European Union also participates.
On April 29 and 30, digital and technology ministers from the participating nations met in Hiroshima, Japan, in conjunction with the G7 summit.
Emerging technologies, digital infrastructure, and AI were discussed at the meeting, with an agenda item explicitly devoted to responsible AI and global AI governance.
The G7’s code of conduct for AI arrives as governments worldwide attempt to navigate the rise of AI, balancing its useful capabilities against its potential risks. The European Union was among the first governing bodies to establish guidelines, passing the first draft of its landmark EU AI Act in June.
The United Nations established a 39-member advisory committee on October 26 to address global AI regulation issues.
In August, the Chinese government also enacted its own artificial intelligence regulations.
OpenAI, the developer of the popular AI chatbot ChatGPT, has announced plans to establish a “preparedness” team to evaluate a range of AI-related risks.