The Trump administration is seeking feedback on its artificial intelligence action plan while Congress plans its next move in AI regulation – which may include standing up a new AI task force and codifying the AI Safety Institute.
In a request for information (RFI) published to the Federal Register on Feb. 6, the Office of Science and Technology Policy (OSTP) and the Networking and Information Technology Research and Development (NITRD) National Coordination Office (NCO) said they are accepting input on an AI action plan after President Donald Trump signed an executive order (EO) during his first week in office to build “America’s global AI dominance.”
Respondents will have until March 15 to provide input while the Trump administration develops an action plan that will eliminate “unnecessarily burdensome requirements” that may “hamper private sector AI innovation” – a nod to Trump’s strong criticism of the Biden-era EO on AI, which the Republican party had long argued stifled AI innovation.
“The Trump Administration recognizes that with the right government policies, the United States can solidify its position as the leader in AI and secure a brighter future for all Americans,” reads the RFI.
While the White House ramps up its efforts to promote innovation and take a largely deregulatory approach to AI, Rep. Jay Obernolte, R-Calif., who co-led the last Congress’s House Task Force on AI, said on Tuesday that legislators are discussing what this Congress’s AI task force will look like – but they are facing pushback from other existing policy committees.
“I am very much convinced that we still need a nucleus of people that are dedicated to action on this issue, and I feel like we completed the planning phase and came out with this report, and so now we have concrete steps that need to be taken, and I think we need a place to launch the legislation that’s going to implement that,” said Rep. Obernolte, speaking at the 2025 State of the Net event.
“We’re getting some pushback from the existing policy committees, feeling like we want to steal their jurisdiction away, which we’re not trying to do,” continued the representative.
Rep. Obernolte envisioned the new task force focusing on preemptive action, saying that one goal of the House Task Force on AI’s final report, released in December, is to avoid a “patchwork” of different state regulations on AI that could limit innovation.
The final report, put together by the 24-member bipartisan task force, detailed how the United States can harness AI across 14 socio-economic areas and provided 65 key findings and 89 recommendations. Upon its release, lawmakers said it was intended as a roadmap for the future, approaching AI as a rapidly changing technology that will require regular updates to guidance and regulation while filling a gap in Federal regulation of the technology.
“The only way we [prevent individual state regulation] is to start implementing some of these recommendations, we can’t preempt something with nothing, so we need to give states that confidence, and I think if we do, then we’re going to get to a place where we can define where the boundaries of preemption are going to go, and people will be accepting of it,” said Rep. Obernolte.
Obernolte also said that the AI Safety Institute – which currently lacks a permanent leadership team and is housed within the National Institute of Standards and Technology (NIST) – would aid in creating sectoral regulations, another primary recommendation of the final report.
The representative said that he plans on introducing legislation soon that would codify the institute, which would then be responsible for developing testing and evaluation methodologies, putting together different standards, creating regulatory sandboxes for malicious AI testing, and organizing a pool of technical talent.
“These [standards set by the safety institute] are not necessarily compulsory or mandatory … These standards are just tools, and it’ll be up to the sectoral regulators to figure out which of these testing and evaluation methodologies to use and which not,” said Rep. Obernolte.
The lawmaker also said he didn’t think President Trump’s decision to repeal the Biden-era EO on AI was “right,” noting that while there were parts of it that were bad, there “were a lot of parts that were good too.”
“There was a lot in that EO that dealt with the way that AI can be used to enhance the efficiency of government and enhance the efficacy by which we provide government services to people,” said Rep. Obernolte. “Deep thinking went into those parts of the executive order, and those are the parts I think that need to be kept.”
While Trump’s EO provided scant details about what to expect from the action plan, respondents were asked to provide “concrete AI policy” suggestions on topics such as hardware and chips; data centers; energy consumption and efficiency; model development; open-source development; application and use; risks, regulation, and governance; technical and safety standards; national security and defense; and other AI-related areas.