
A bipartisan pair of senators introduced legislation on Monday that would subject advanced AI systems to a new evaluation program before they can be deployed, with steep penalties for developers who do not comply.
Sens. Josh Hawley, R-Mo., and Richard Blumenthal, D-Conn., introduced the Artificial Intelligence Risk Evaluation Act to create an “Advanced Artificial Intelligence Evaluation Program” within the Department of Energy (DOE). The program would track safety concerns related to national security, civil liberties, and labor protections.
Specifically, the bill would require developers of advanced AI systems, such as those that may attain superintelligence, to participate in the program and submit information about their models to the DOE before deployment.
Developers who do not participate in the program would face fines of at least $1 million for each day they fail to submit the required information.
“As Big Tech companies continue to develop new generations of artificial intelligence, the wide-ranging risks of their technology continue to grow unchecked and underreported,” said Sen. Hawley in a statement.
“Simply stated, Congress must not allow our national security, civil liberties, and labor protections to take a back seat to AI. This bipartisan legislation would guarantee common-sense testing and oversight of the most advanced AI systems, so Congress and the American people can be better informed about potential risks,” he continued.
According to the bill’s text, the evaluation program would use standardized and classified testing of advanced AI systems to estimate the likelihood of adverse AI incidents, test anticipated real-world jailbreaking techniques, incorporate third-party assessments, create containment protocols and contingency plans, and build evidence-based standards and regulations from the collected data.
The program would also help Congress determine how controlled AI systems might attain superintelligence or exceed human oversight or operational control, including whether they “pose existential threats to humanity.”
That includes requiring the secretary of energy to report annually to Congress with a recommended plan for federal oversight of advanced AI systems.
The “loss-of-control” scenarios described in the bill’s text include a system acting contrary to its instructions, deviating from rules established by its developers, operating beyond its intended scope, subverting oversight or shutdown mechanisms, or otherwise behaving “in an unpredictable manner so as to be harmful to humanity.”
“AI companies have rushed to market with products that are unsafe for the public and often lack basic due diligence and testing. Our legislation would ensure that a federal entity is on the lookout, scrutinizing these AI models for threats to infrastructure, labor markets, and civil liberties – conducting vital research and providing the public with the information necessary to benefit from AI promises, while avoiding many of its pitfalls,” said Sen. Blumenthal.
The senators introduced legislation in July to protect content creators from AI, and during the last Congress they backed a bipartisan legislative framework to create AI guardrails.
Standardized assessments and testing of AI systems have been priorities for Michael Kratsios, director of the White House Office of Science and Technology Policy, who has zeroed in on the need for better risk assessments.