The Department of Homeland Security (DHS) released a long-awaited report on Friday that offers guidance on how to combat potential threats that AI technologies could pose in the chemical, biological, radiological, and nuclear (CBRN) realms.

The report is a requirement stemming from President Biden’s AI executive order issued in late October 2023. The EO called for a report to the president that includes an assessment of the AI models that may pose CBRN risks to the United States, as well as recommendations for the use of these models.

The DHS Countering Weapons of Mass Destruction Office (CWMD) prepared the 24-page report, which was completed in late April but only made public on June 21.

“The report is meant to provide longer-term objectives around how to ensure safe, secure, and trustworthy development and use of artificial intelligence, and guide potential interagency follow-on policy and implementation efforts,” DHS Secretary Alejandro Mayorkas said in the report.

Mayorkas said the report was developed in collaboration with experts in AI and CBRN, the Department of Energy, private AI laboratories, academia, and third-party model evaluators.

The report first outlines current trends in AI, noting that responsible use of AI holds great promise while potential misuse poses consequential risks. It then offers a series of recommendations to mitigate potential AI threats to national security.

One key recommendation is to incorporate AI-specific CBRN topics into regular actionable intelligence and threat information sharing, reporting, and engagements — both among Federal agencies and with state, local, tribal, and territorial (SLTT) stakeholders and partners — to mitigate threats and risks.

The report mentions the DHS Artificial Intelligence Safety and Security Board as one mechanism to promote information sharing and establish best practices.

Another recommendation is for a designated Federal government entity to develop programs that educate policymakers, scientists, and the public about the capabilities and risks associated with the use of AI.

The report also recommends the “adoption of guardrails to protect against reverse engineering, loss, or leakage of sensitive AI model weights by both non-state and state actors.” This could include cybersecurity and insider threat training or investments in insider threat programs, according to the report.

It also recommends that developers use heightened standards for access to high-risk specialized tools and services, such as biological design and chemical retrosynthesis tools. DHS said these concepts could build on established implementation models like the Cybersecurity and Infrastructure Security Agency’s “Secure by Design” initiative.


Additionally, DHS aims to build on the White House’s voluntary AI safety commitments by creating “a standard framework for the release of AI models for pre-release evaluations and red teaming of AI models by third parties and post-release reporting of potential hazards for foundation models to accrue information.”

“My vision for CWMD for 2024 is PREPARE-CONNECT-TRANSFORM and this report is an exemplar of each of these three areas,” Mary Ellen Callahan, assistant secretary of the DHS CWMD, said in a statement. “CWMD remains committed to ensuring the safety of the Americans from CBRN threats, and through a whole-of-community approach realizing the promise and mitigating the threat posed by AI in this space.”

Grace Dille
Grace Dille is MeriTalk's Assistant Managing Editor covering the intersection of government and technology.