The director of artificial intelligence at the Office of Management and Budget (OMB) said on Nov. 9 that OMB’s draft AI guidance to Federal agencies – released early this month – takes a broad approach that encompasses both traditional and generative AI.

“What this document would do is set a baseline of absolute minimum thresholds that agencies need to cross when using AI in potentially risky ways,” Conrad Stosz said during an IBM event in Washington.

“As we get more specifics, whether it’s the specifics of a particular agency use case or the limitations of a particular technology, I think we’re expecting that agencies are going to have to continue to build on top of that, and using additional and more comprehensive frameworks such as the AI Risk Management Framework and future guidance coming from [the National Institute of Standards and Technology] on generative AI,” Stosz said.

OMB released its draft AI guidance on Nov. 1, on the heels of the Biden administration’s sweeping AI executive order. The 26-page document – which is open for public comment until Dec. 5 – aims to establish AI governance structures in Federal agencies, advance responsible AI innovation, and manage risks from government uses of AI.

The guidance lays out a range of marching orders for Federal agencies, including appointing chief AI officers and adopting a lengthy list of safeguards to follow while developing AI applications.

“It’s certainly our view that generative AI is, of course, here to stay and will have benefits to the government,” Stosz said. “The government and many agencies deal with a lot of text data generally and they often have limits in terms of their ability to process large amounts of text – grant applications, public comments, large amounts of text – and it’s clear that generative AI is useful in these sorts of tasks.”

He continued, “That being said, the technology still has its limitations, and the norms and expectations around how it can be used responsibly are still developing. We really think that agencies should be experimenting and learning about those limitations – and learning about what it can help them do – while putting safeguards in place.”

Stosz said OMB wants agencies to “take a confident approach” and “start small” with their generative AI experiments. A prime example, he said, would be to begin testing large language models to change and enhance the way the government interacts with the public during comment periods. However, there need to be safeguards in place to account for some of the technology’s limitations, Stosz reiterated.

“The U.S. government serves all people and all different types of speakers and languages,” so agencies need to “make sure there is an element of human involvement in review and experimentation that proves [AI tools] work the way they’re supposed to,” he said.

Stosz highlighted that agencies need to begin grappling with their employees’ use of AI now.

“Instead of just saying, ‘we’re going to ban everything with generative AI on the internet,’ we’re trying to take a risk-based approach where we identify, for particular services: Do they have a use policy? What do they do with the data? And understand how those services are actually exposing potentially government information,” Stosz said.

He added, “We’re going to see a lot of shades of grey here. But the core is establishing and understanding how to interpret these existing policies so that we protect Federal information when using generative AI without sticking our heads in the sand and pretending it doesn’t exist.”

Cate Burgan is a MeriTalk Senior Technology Reporter covering the intersection of government and technology.