The Department of Homeland Security (DHS) on April 29 offered up new guidance and analysis on the use of artificial intelligence technologies in two particularly sensitive areas: weapons of mass destruction (WMD) and the 16 U.S. sectors classified by the Federal government as critical infrastructure.

The agency’s release of the guidance documents met 180-day deadlines set by President Biden’s AI executive order issued on Oct. 30, 2023.

On the critical infrastructure front, DHS published the new Safety and Security Guidelines for Critical Infrastructure Owners and Operators in coordination with its Cybersecurity and Infrastructure Security Agency (CISA) component. DHS is the Federal government’s sector risk management agency for 11 of the 16 critical infrastructure sectors.

The new guidance, DHS said, addresses “cross-sector AI risks impacting the safety and security” of U.S. critical infrastructure sectors and organizes its analysis over three categories of system-level risk: attacks using AI; attacks targeting AI systems; and failures in AI design and implementation.

To address those concerns, DHS is providing a four-part mitigation strategy that it said critical infrastructure owners and operators can use. The strategies include establishing an organizational culture of risk management, understanding AI use context and risk profiles, developing systems to track risks, and acting on risks to safety and security.

“CISA was pleased to lead the development of ‘Mitigating AI Risk: Safety and Security Guidelines for Critical Infrastructure Owners and Operators’ on behalf of DHS,” commented CISA Director Jen Easterly.

“Based on CISA’s expertise as National Coordinator for critical infrastructure security and resilience, DHS’ Guidelines are the agency’s first-of-its-kind cross-sector analysis of AI-specific risks to critical infrastructure sectors and will serve as a key tool to help owners and operators mitigate AI risk,” she said.

Separately, DHS issued select portions of an EO-directed report to President Biden written by DHS’s Countering Weapons of Mass Destruction (CWMD) Office on “the potential for AI to be misused to enable the development or production of CBRN [chemical, biological, radiological, and nuclear] threats, while also considering the benefits and application of AI to counter these threats.”

A fact sheet issued by DHS describes some of the pros and cons of AI’s impact on scientific research, and the need for AI technology governance to be “adaptive and iterative to respond to rapid or unpredictable technological advancements,” among other topics.

“The responsible use of AI holds great promise for advancing science, solving urgent and future challenges, and improving our national security, but AI also requires that we be prepared to rapidly mitigate the misuse of AI in the development of chemical and biological threats,” commented Mary Ellen Callahan, DHS assistant secretary for CWMD.

“This report highlights the emerging nature of AI technologies, their interplay with chemical and biological research and the associated risks, and provides longer-term objectives around how to ensure safe, secure, and trustworthy development and use of AI,” she said.

John Curran is MeriTalk's Managing Editor covering the intersection of government and technology.