Schumer-led bipartisan senators propose a $32 billion investment over three years to develop AI and enforce safeguards, aligning with recent bipartisan efforts in AI regulation and promotion.
A bipartisan group of four senators, led by Majority Leader Chuck Schumer, recommends that Congress allocate at least $32 billion over the next three years to develop artificial intelligence (AI) and implement safeguards around it.
The roadmap is the latest initiative by the United States government to promote and regulate AI development. It arrived six days after U.S. lawmakers introduced a bipartisan measure to help the Biden administration impose export controls on leading AI models developed in the United States.
After months of deliberations with industry experts and AI critics, the bipartisan working group concluded that investing in AI is crucial for the United States to maintain its competitive edge over international rivals and to improve Americans' quality of life, including by funding technologies with the potential to cure cancer and chronic diseases.
Although not a formal bill or policy proposal, the roadmap offers insight into the scope and scale of the AI legislation that lawmakers and relevant stakeholders foresee, laying a foundation for more comprehensive and detailed policies to come.
The senators’ proposal also calls for enforcing “existing laws for AI,” which includes developing use case-specific requirements for AI transparency and explainability, prioritizing testing standards to understand potential AI harms, and addressing any gaps or unintended harmful bias.
The group also recommended new transparency requirements for the rollout of AI products, along with research into AI’s potential impact on jobs and the U.S. workforce.
The AI Working Group is not the first effort to regulate the exponential growth of generative AI (genAI) and the adoption of AI more broadly. The AI Safety Institute Consortium (AISIC), an alliance of more than two hundred organizations established by NIST in February, is dedicated to developing safety guidelines for AI systems.
Experts assert that the United States lags behind several other jurisdictions, most notably the European Union, which has established a substantial regulatory lead in artificial intelligence. In March, the EU enacted a comprehensive new law governing AI across all 27 member states, putting pressure on the United States to catch up.
That legislation established protections for general-purpose AI, restricted law enforcement’s use of biometric identification systems, prohibited social scoring and AI that manipulates users or exploits their vulnerabilities, and granted consumers the right to file complaints and receive “meaningful explanations” from AI providers.