To better align its models with human values, OpenAI outlined five key lessons from its $1 million grant program aimed at involving the public in deciding how artificial intelligence (AI) systems should behave.
The organization announced in May 2023 that it would distribute ten grants of $100,000 each to fund experiments in building a democratic “proof-of-concept” process for setting the rules that govern AI systems.
In a January 16 blog post, the AI firm detailed how the ten grant-receiving teams innovated on democratic technology, how OpenAI plans to implement that technology, and the program’s key takeaways.
The post states that the teams gathered participants’ perspectives through various means and found that public opinion shifted frequently, which may affect how often input processes should be run.
OpenAI concluded that to capture fundamental values effectively, a collective process must be sensitive enough to detect significant shifts in perspectives over time.
Moreover, some teams found that bridging the digital divide remains difficult and can skew results: recruiting participants across that divide proved arduous because of platform constraints and difficulties understanding the local language or context.
Reaching agreement also proved difficult for some teams working with polarized groups, notably when a small subgroup held strong views on a particular issue.
The Collective Dialogues team found that in each session a minority vehemently opposed restricting AI assistants from responding to certain inquiries, which led to dissent over the outcomes of the majority vote.
According to OpenAI, reconciling consensus with representing a wide range of opinions in a single outcome is formidable. One team’s examination of voting mechanisms revealed that approaches guaranteeing equal participation and reflecting participants’ strongly held views were regarded as more democratic and fair.
On concerns about AI’s future role in governance, the report notes that several respondents were wary of using AI in policy formulation and wanted greater transparency about how it would be applied. After deliberation, however, teams came away more optimistic about the public’s capacity to help steer AI.
OpenAI intends to implement the public participants’ suggestions through Collective Alignment, a new team of researchers and engineers tasked with building a system to collect public feedback on the behavior of OpenAI’s models and encode it into the company’s products and services.