The House Task Force on Artificial Intelligence released its final 253-page report on Tuesday morning, providing sweeping policy recommendations and laying out a roadmap for what lawmakers are calling an “agile approach” to the new technology.  

The report details how the United States can harness AI in health, agriculture, social, and economic settings, and evaluates its potential national security uses and risks. It also addresses shortfalls in the Federal AI workforce and presents 66 key findings and 89 recommendations, organized into 14 areas. 

Lawmakers said the report is intended as a roadmap for the future, treating AI as a rapidly changing technology that will require regular updates to guidance and regulation. It is “unreasonable to expect Congress to enact legislation this year that could serve as its last word on AI policy,” they wrote. “To use AI technology properly requires a carefully designed, durable policy framework.” 

“The report details a roadmap for Congress to follow to both safeguard consumers and foster continued U.S. investment and innovation in AI,” Rep. Jay Obernolte, R-Calif., co-chair of the task force, said in a statement. “I am confident that this report will be an essential tool for crafting AI policy that protects Americans, spurs innovation, and secures American leadership in this revolutionary technology.” 

Guiding principles used in developing the report include identifying whether issues with AI are new, promoting AI innovation, protecting against AI risks and harms, empowering the government with AI, using sectoral and incremental regulation, and maintaining human-centric policies. 

“Collaborating across party lines to find consensus is not easy, and that is especially true for something as far-reaching and complex as AI,” said Co-Chair Rep. Ted Lieu, D-Calif., in a statement.  

“Despite the wide spectrum of political views of Members on our Task Force, we created a report that reflects our shared vision for a future where we protect people and champion American innovation,” Rep. Lieu added. “We have made our best efforts based on the information we have, but with the rapid pace of change in both AI software and hardware, we are fully aware that we don’t know what we don’t know.” 

A central theme of the report, produced by the 24-member task force – 12 Democrats and 12 Republicans – is a call to enforce existing laws and regulatory frameworks before developing new regulations.  

Members warned that excessive regulation could create unnecessary redundancy, ambiguity, or conflicting guidance unless a thorough analysis shows that existing frameworks are insufficient. 

“To reduce any potential redundancy, ambiguity, or conflicting guidance, developing new policies involving AI use and procurement should start with an analysis and understanding of how existing policies and procedures can be applied,” reads the report. “Legislatively harmonious solutions and holistic operational resources spanning the AI life cycle are needed to enable consistently managed government AI systems.” 

The report also recommends sector-specific regulation over a blanket approach, noting that it would enhance collaboration between Federal agencies and other entities while allowing agencies to focus their expertise for greater efficiency. 

Creating a Federal AI resource hub that would offer expertise, data, and risk assessments could also be beneficial, lawmakers noted, adding that improving coordination among agencies through the hub would boost the sharing of best practices while preserving specialization. 

The report also focuses on avoiding stifling innovation in the private sector, noting that financial services regulation should take a “technology-neutral” approach that allows primary regulators to leverage their expertise and maintain their authority, even where it intersects with AI. “Primary regulators understand their respective fields, markets, and AI use cases within those markets,” the report says. 

While aiming to foster innovation, lawmakers also cautioned that generative AI systems – such as those creating text, images, video, and audio – raise significant concerns about content authentication. 

They concluded that no single solution could fully address these issues, noting that while technical literacy is important, it alone isn’t enough. One potential solution they suggested is digital identity technology, which could help verify online identities and reduce fraud. 

Lawmakers said that using a risk-based, multi-pronged approach to content authenticity, addressing demonstrable rather than speculative harms, and examining existing related laws can all form part of a possible solution. Ensuring that victims of deepfakes and other synthetic content have tools for pursuing compensation is also necessary, the report adds.  

“Identify the responsibilities of AI developers, content producers, and content distributors when it comes to synthetic content,” the report says. “Congress should examine legislation that helps create or identify the legal responsibilities of AI developers, content producers, and content distributors regarding synthetic content. The federal government could play a role in clarifying legal responsibilities for AI developers, content producers, and content distributors.” 

The use of generative AI to create synthetic media deepfakes has risen over the last few years, prompting the Department of Defense to invest millions into detecting deepfakes and leading the State Department to form a Federal interagency task force aimed at authenticating content and combatting deepfakes.  

According to the report, intellectual property (IP) protections should also be strengthened by clarifying IP rights for AI-assisted creations and inventions, addressing inventorship under patent law, and increasing transparency around the copyrighted works used for AI training and the role of AI in generating outputs. 

“Although digital replicas and deepfakes have existed for many years, AI technology has vastly amplified the size of the problem by making high-quality, realistic replicas accessible to nearly anyone with little effort,” wrote task force members. “Congress could address this problem in several ways. One way would be to empower individuals to protect their identity-based rights and establish nationwide protections while avoiding encroaching on speech that is protected by the First Amendment.” 

The final report comes as the incoming Trump administration prepares to take office on Jan. 20, on the tail end of a flurry of AI bills – more than 100 this session, though many never reached President Biden’s desk – and as the 119th Congress prepares to be sworn in in early January.   

“The Task Force members feel strongly that Congress must develop and maintain a thoughtful long-term vision for AI in our society,” the report reads. “This vision should serve as a guide to the many priorities, legislative initiatives, and national strategies we undertake in the years ahead.” 

Weslan Hansen is a MeriTalk Staff Reporter covering the intersection of government and technology.