The Biden-Harris administration released its artificial intelligence (AI) executive order (EO) in October 2023, but most of the work to implement the directive took place in 2024.

As MeriTalk reflects on the past year – and looks toward the future of Federal IT work during the Trump-Vance administration in 2025 and beyond – AI is positioned to make impactful strides across agencies, policy, and defense.

AI Use Across Federal Agencies Included Workforce Bumps, Updated Policy Roadmaps, and Loads of New Tools

AI’s footprint across Federal agencies more than doubled in 2024, with agencies reporting this month more than 1,700 use cases of the technology, up from the 757 uses reported last year.

The Treasury Department announced this year that it has recovered $1 billion in fraud and improper payments by leveraging AI. A top Department of Energy (DoE) official said the agency is using AI tech for missions that range from responding to emergencies to planning energy investments and determining the risks of nuclear weapons.

Meanwhile, the National Oceanic and Atmospheric Administration’s (NOAA) Chief Technology Officer (CTO) said recently that the agency has been developing AI use cases for a wide range of scientific inquiries – from “the surface of the sun to the bottom of the ocean.”

CTO Frank Indiviglio said NOAA is leveraging AI to improve environmental monitoring and management, allowing its field workers to be more efficient at their jobs. “We’re expanding the workforce without expanding the workforce,” he added.

AI and the workforce have been front and center in 2024, with the White House Office of Management and Budget (OMB) issuing its finalized policy for the use of AI within Federal agencies in March. Among other things, the policy requires the largest Federal agencies to designate a Chief AI Officer (CAIO).

The Department of Homeland Security (DHS) was also focused on equipping the agency with AI experts this year, launching an “AI Corps” initiative at the beginning of the year.

Two major initiatives from Biden’s AI EO also progressed this year: the National Science Foundation’s (NSF) National AI Research Resource (NAIRR) and the National Institute of Standards and Technology’s (NIST) AI Safety Institute (AISI).

NSF launched NAIRR early this year to serve as a shared national infrastructure to support the AI research community and power responsible AI use.

NIST’s AISI also made strides, releasing its first set of draft guidance offering best practices for developers of AI foundation models to manage the risk that their models will be deliberately misused to cause harm. AISI also signed first-of-their-kind research and testing agreements with Anthropic and OpenAI – a key step in allowing the government to access and assess the companies’ latest AI models. The institute also held the inaugural convening of the International Network of AI Safety Institutes in San Francisco late last month.

Finally, the Biden-Harris administration delivered on one last major initiative of its AI EO: the first-ever National Security Memorandum (NSM) on AI.

The NSM directs the government to take concrete, impactful steps to ensure that the U.S. leads the world in developing safe, secure, and trustworthy AI; to harness cutting-edge AI technologies to advance the government’s national security mission; and to advance international consensus and governance around AI.

Federal AI Policy Top of Mind for Lawmakers, but No Concrete Progress Made

While state and local AI laws continued to advance in 2024, Congress failed to pass any major Federal AI legislation before the end of the 118th session.

However, both House and Senate AI leaders did unveil their wish lists for AI policy this year – laying out a roadmap for legislators to follow in 2025 and beyond.

Senate Majority Leader Chuck Schumer, D-N.Y., unveiled a bipartisan roadmap for AI policy in the Senate in May. The 31-page document highlights eight AI policy priorities for Senate committees to consider, including boosting Federal funding for AI – to the tune of $32 billion per year – and passing a comprehensive Federal data privacy law.

In September, the Senate AI Caucus introduced a package of bipartisan AI legislation aiming to increase AI literacy, enable the use of AI to enhance the efficiency of U.S. shipyards, spur innovation in financial services, and improve healthcare outcomes.

The House of Representatives launched its AI Task Force in February and unveiled its blueprint for AI policy in December. The more than 200-page document presents 66 key findings and 89 recommendations on how the U.S. can harness AI in health, agriculture, social, and economic settings, and evaluates its potential national security uses and risks.

AI was also front and center for lawmakers during this year’s election cycle.

The 2024 election marked the first presidential race in which deepfakes – which use a form of AI called deep learning to create fake images or videos – became mainstream. Government agencies such as the FBI, National Security Agency (NSA), and Cybersecurity and Infrastructure Security Agency (CISA) have warned that deepfakes are a top political threat.

Lawmakers pushed to pass some AI regulations before election season was in full swing. Members of the House’s bipartisan Task Force on AI introduced a bill in March that aimed to protect Americans from AI-generated content during the 2024 election cycle by setting standards for identifying AI content – such as watermarking. The measure did not advance through Congress.

Earlier this year, CISA Director Jen Easterly testified before Congress on the threat AI poses to the 2024 elections. Easterly said CISA “continues to provide guidance on the tactics used by adversaries,” and it maintains and develops resources to protect and support state and local election officials, such as the ‘Rumor vs. Reality’ website to combat false narratives.

As Republicans prepare to take control of both the House and Senate, it remains unclear what path the incoming administration will take on regulating AI. Following President-elect Trump’s appointment of David Sacks as his new “AI czar,” experts have speculated that the new administration will pursue a “pro-AI” platform – possibly throwing out Biden’s AI EO altogether and taking a more deregulatory approach to the technology.

Pentagon AI Programs, Policy Evolve

The Defense Department continued to make strides in AI this year, announcing in December that it is sunsetting Task Force Lima – the organization that developed, evaluated, recommended, and monitored generative AI capabilities across the department – and replacing it with a new AI Rapid Capabilities Cell (AI RCC).

The AI RCC will be focused on accelerating DoD adoption of next-generation AI into 2025 and beyond, including generative AI, and will be managed by the Chief Digital and Artificial Intelligence Office (CDAO) in partnership with the Defense Innovation Unit (DIU).

The CDAO also unveiled plans to re-compete its Advana enterprise data and analytics platform this year, transitioning from its current contracting vehicle to a new model, with plans for a 10-year, multi-vendor contract worth up to $15 billion.

The DoD also unveiled two chemical, biological, radiological, and nuclear (CBRN) quadrupedal unmanned ground vehicles – otherwise known as robot dogs – at Buckley Space Force Base in Aurora, Colo., this year. The four-legged robot, affectionately called CHAPPIE, has been in use on the base since July.
