OpenAI plans to execute its “Preparedness Framework,” which includes forming a dedicated risk-assessment and forecasting unit known as the Preparedness team.
On December 18, the organization announced in a blog post that its newly formed “Preparedness team” will link OpenAI’s safety and policy teams.
According to the post, these teams will provide a system of checks and balances to help guard against the “catastrophic risks” posed by increasingly powerful models. OpenAI says it will deploy its technology only if it is deemed safe.
Under the new plan, a dedicated advisory team will review safety reports before forwarding them to company executives and the OpenAI board.
While executives retain final decision-making authority, the revised framework gives the board the power to reverse safety-related decisions.
This follows a period of significant transitions for OpenAI in November, which included the unexpected dismissal and subsequent reinstatement of CEO Sam Altman.
Following Altman’s reinstatement, the organization announced its new board, chaired by Bret Taylor and including Larry Summers and Adam D’Angelo.
Interest in artificial intelligence has surged since OpenAI released ChatGPT to the public in November 2022, but so have concerns about the risks the technology may pose to society.
In July, prominent AI developers, including OpenAI, Microsoft, Google, and Anthropic, established the Frontier Model Forum to oversee the self-regulation of responsible AI development.
In October, U.S. President Joe Biden issued an executive order laying out new AI safety standards for companies developing and deploying high-level models.
Before Biden’s executive order, the White House had invited leading AI developers, including OpenAI, to commit to developing safe and transparent AI models.