The House Artificial Intelligence Task Force’s upcoming final report will focus on breaking up regulations into “bite-sized pieces” and not blocking future advances in AI technology, said task force co-chair Rep. Jay Obernolte, R-Calif., on Wednesday.
Speaking at the Amazon Web Services-sponsored Capitol Hill Cloud Day on Dec. 4, Rep. Obernolte said the report – which he expects to be released sometime this week – and its recommendations aim to serve as a template and guide for future congressional policies, while addressing AI as a multi-dimensional issue.
“This is obviously not the last word in AI, it is just the first beginning of the first word in AI,” said Rep. Obernolte. “We think it is foolish to believe that we know enough about AI and the direction AI is going to move in the next few years to be able to do an effective job completely regulating with one bill next year.”
Rep. Obernolte noted that the report also examines how to establish consolidated Federal regulation without overreaching into the authority of the states.
“If we allow all 50 states to create this patchwork of 50 different regulations on what AI can be deployed and what can’t be, what has to be tested and what doesn’t . . . that’s going to create an atmosphere that not only is destructive to innovation, but is very harmful to entrepreneurialism,” the congressman said.
Rep. Obernolte acknowledged that states have voiced concern that the Federal government lacks the ability to provide effective regulation – especially with regard to data privacy – and said the report aims to respond to that concern. “This is one thing that I hope that our task force report gives people some comfort on, is that Congress is capable of acting on this issue,” he said.
Rep. Zach Nunn, R-Iowa, who joined Rep. Obernolte at the Dec. 4 event, added that Congress first needs to focus on developing and harmonizing AI regulations within the Federal government before attempting to regulate industry, while also striking a balance between providing safeguards for industry and allowing for innovation.
“There are some government agencies that know AI well, have worked effectively in this space, and then candidly, there are other government agencies that have a hard time spelling AI,” said Rep. Nunn. “We really need to make sure that they are aware of what their requirements are before they get ahead of themselves, telling others – particularly the private sector – how they should or should not be doing this.”
Additional priorities for AI regulation include maintaining the National Institute of Standards and Technology’s AI Safety Institute (AISI), especially under the incoming Trump administration, whose stance on the institute remains unclear, Rep. Obernolte said. AISI, established at the end of 2023, focuses on the safe and responsible development, deployment, and governance of AI technologies.
“The safety institute … is going to be very critical for the success of artificial intelligence in the United States,” said Rep. Obernolte, explaining that AISI helps maintain U.S. leadership in AI safety standards internationally and aids in designing sectoral regulation.
“We need to have federal agencies like the Artificial Intelligence Safety Institute take the lead in developing standards for testing evaluation, because one of the things that we are embracing in the United States is this concept of sectoral regulation,” Rep. Obernolte explained. “That has proved to be a very effective model, but it will only work if we empower those sectoral regulators with the resources and tools necessary to do their job.”