The National Institute of Standards and Technology’s (NIST) newly formed AI Safety Institute (AISI) released its first set of draft guidance last week, offering best practices for developers of AI foundation models to manage the risk that their models will be deliberately misused to cause harm.

The guidelines on Managing Misuse Risk for Dual-Use Foundation Models – open for public comment until Sept. 9 – offer seven key approaches for mitigating the risk that AI models will be misused, along with recommendations for how to implement them and how to be transparent about their implementation.

The 27-page document from the AISI delivers on President Biden’s October 2023 AI executive order (EO), which gave the institute nine months to help AI developers evaluate and mitigate the risks stemming from generative AI and dual-use foundation models – AI systems that can be used for either beneficial or harmful purposes.

“Under President Biden and Vice President Harris’ leadership, we at the Commerce Department have been working tirelessly to implement the historic Executive Order on AI and have made significant progress in the nine months since we were tasked with these critical responsibilities,” Secretary of Commerce Gina Raimondo said in a July 26 statement. “AI is the defining technology of our generation, so we are running fast to keep pace and help ensure the safe development and deployment of AI.”

“[These] announcements demonstrate our commitment to giving AI developers, deployers, and users the tools they need to safely harness the potential of AI, while minimizing its associated risks. We’ve made great progress, but have a lot of work ahead,” she said. “We will keep up the momentum to safeguard America’s role as the global leader in AI.”

Conrad Stosz, the director of policy at the NIST AISI, explained during an ITI webinar on Tuesday that the document focuses on providing voluntary guidelines for how AI developers can manage the risk of AI models being “deliberately” misused.

“It includes things like some of the more speculative and emerging national security risks – like the potential for a foundation model to enable the development of biological weapons or to enable offensive cyber operations,” Stosz said. “But also, current harms – so things like AI being misused to develop or to generate child sexual abuse material or non-consensual, intimate imagery, also known as deep fake pornography, for example.”

The document’s seven objectives call on AI developers to, among other things, anticipate potential misuse risks, establish plans for managing those risks, manage the risk of model theft, and provide appropriate transparency about misuse risk.

Stosz said that the document – with its seven objectives and 21 corresponding best practices – is not meant to be “exhaustive” and that “there’s a lot of room for improvements” ahead of the final document that will be issued later this year following the consideration of public comments.

“[We] recognize that these definitely continue to evolve as time goes on, and even if you read this document six months ago, the contents would likely look different,” he said.

During a separate event hosted by CSIS today, NIST AISI Director Elizabeth Kelly highlighted the work the institute will begin over the next year.

“We are excited to begin testing of frontier models prior to deployment, and I think we’re in a good position to begin that testing in the months ahead, because of the commitments that we’ve gotten from the leading companies,” Kelly said. The White House has collected voluntary AI safety commitments from 16 leading companies, including Google, Microsoft, and Apple.

“We’re really excited for the launch of the AI Safety Institute [global] network and for this convening in November to bring together not just the allies and partners who have stood up safety institutes or similar entities for the technical conversations on things like benchmarks and capabilities risk mitigations, but also to bring together the broader civil society, academia, industry, technical experts, and I think the fact that we’re hosting in San Francisco really speaks to the U.S. leadership role here and how we want to continue to maintain and grow that,” she added.

The legislation to codify NIST’s AISI – the Future of AI Innovation Act – is set to be marked up by the Senate Committee on Commerce, Science, and Transportation today.

Cate Burgan is a MeriTalk Senior Technology Reporter covering the intersection of government and technology.