Just days after President Biden signed a new executive order (EO) on AI, two senators introduced a bill to give that White House measure more teeth.
The Federal Artificial Intelligence Risk Management Act – introduced by Senate Intelligence Chair Mark Warner, D-Va., and Sen. Jerry Moran, R-Kan., on Nov. 2 – would require Federal agencies to follow the safety standards developed earlier this year in the National Institute of Standards and Technology’s (NIST) AI Risk Management Framework (RMF).
Biden’s EO nods several times to NIST’s AI RMF – which is completely voluntary – but it stops short of requiring all Federal agencies to adopt its provisions. If the Federal Artificial Intelligence Risk Management Act is signed into law, the measure would have more lasting power than an EO, which could be rescinded by a future administration.
The bipartisan bill directs the Office of Management and Budget to establish an initiative to provide AI expertise to agencies. It also directs the Administrator for Federal Procurement Policy and the Federal Acquisition Regulatory Council to ensure agencies procure AI systems that incorporate NIST’s AI framework. Lastly, the bill requires NIST to develop standards for testing and validating AI in Federal acquisitions.
“AI has tremendous potential to improve the efficiency and effectiveness of the federal government, in addition to the potential positive impacts on the private sector,” Sen. Moran said in a statement. “However, it would be naïve to ignore the risks that accompany this emerging technology, including risks related to data privacy and challenges verifying AI-generated data. The sensible guidelines established by NIST are already being utilized in the private sector and should be applied to federal agencies to make certain we are protecting the American people as we apply this technology to government functions.”
The legislation follows the White House’s issuance of draft guidelines directing all Federal agencies to appoint chief AI officers and boost their AI hiring, as well as the formal launch of an Artificial Intelligence Safety Institute in the U.S.
“The rapid development of AI has shown that it is an incredible tool that can boost innovation across industries,” Sen. Warner said. “But we have also seen the importance of establishing strong governance, including ensuring that any AI deployed is fit for purpose, subject to extensive testing and evaluation, and monitored across its lifecycle to ensure that it is operating properly. It’s crucial that the federal government follow the reasonable guidelines already outlined by NIST when dealing with AI in order to capitalize on the benefits while mitigating risks.”
The senators’ bill drew support from prominent players in the private sector and academia, including leaders at Microsoft and Workday.
Rep. Ted Lieu, D-Calif., plans to introduce companion legislation in the House, the senators noted.
Sen. Moran had previously tried and failed to add similar language to the Senate’s annual defense bill earlier this summer.