Generative AI technology is powering new opportunities for Federal agencies, but Department of Defense (DoD) officials on Tuesday also warned of the technology's risks, including the advantages it offers to adversaries and its high operating costs.

At the Nov. 19 Red Hat Government Symposium in Washington, D.C., Federal officials discussed the importance of knowing the risks when experimenting with generative AI.

“It’s certainly a lot of opportunity, but there’s certainly lots of risk there,” said Rear Adm. Dennis Velez, the chief of staff at U.S. Cyber Command. “I think the biggest risk for us is adversaries finding, the same way that we’re finding ways to use [AI] today, to find weaknesses, etc., in our security infrastructure.”

Velez said the worry is that adversaries would be able to find and exploit those vulnerabilities “at speed and scale … that we will not be able to respond to.”

Michelle Davis, who moderated the conversation and serves as the director of solutions architects at Red Hat, validated Velez’s concern, saying, “As generative AI is getting into the hands of most folks, it is also getting into the hands of our adversaries, and we have to at least stay one step ahead of this constant changing threat.”

Col. Daniel May, the chief AI officer for the Air Force intelligence community, said that while the Air Force has found “very innovative ways to apply AI and generative AI,” it does come with a cost.

The Air Force chief information officer (CIO) and the Air Force Research Laboratory (AFRL) launched an experimental AI-powered chatbot for the Air Force in June. The NIPRGPT platform aims to provide airmen, guardians, and civilian Air Force officials with an AI chatbot that facilitates human-like conversations to complete various tasks.

However, May said that NIPRGPT "deployment is difficult," and costly.

“I think to some extent, people don’t understand how expensive those models are to operate, and that just the sheer compute resources that you need to be able to query a NIPRGPT, it can run up the cloud bill substantially,” May said. “So, if every airman is sitting at their desk all day typing questions in the NIPRGPT, you’re going to owe a lot of money for that.”

Velez agreed, adding that “understanding the cost is certainly a factor in determining what you need,” as pricing can quickly become expensive.
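The scale May describes can be illustrated with a rough back-of-envelope estimate. The sketch below is hypothetical: the user count, query volume, and per-query compute price are placeholder assumptions, not figures from the Air Force or from this article; it only shows how per-query costs compound across a large workforce.

# Hypothetical back-of-envelope estimate of monthly inference spend for a
# force-wide chatbot. All figures below are placeholder assumptions, not
# Air Force data, chosen only to show how per-query costs compound.

users = 300_000                  # assumed number of active users (hypothetical)
queries_per_user_per_day = 20    # assumed daily queries per user (hypothetical)
cost_per_query_usd = 0.01        # assumed blended compute cost per query (hypothetical)
workdays_per_month = 22

monthly_queries = users * queries_per_user_per_day * workdays_per_month
monthly_cost = monthly_queries * cost_per_query_usd

print(f"Queries per month: {monthly_queries:,}")
print(f"Estimated monthly compute cost: ${monthly_cost:,.0f}")

Under these assumed inputs the bill lands well above a million dollars a month, which is the dynamic May points to when he cautions that force-wide use "can run up the cloud bill substantially."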

May added that the service also needs to consider the impact on airmen's skills if capabilities like NIPRGPT were rolled out force-wide.

“If we roll these capabilities out force-wide, are we going to lose skills? And one example is, if I don’t exercise my cognitive functions in doing something like writing, because I’m relying on GenAI to write papers … do I lose that faculty, and how quickly do I lose it? And to what extent do I lose it, or don’t I lose it at all? And so, I think there are a lot of questions as far as an impact on the GenAI use to the force that we’re just starting to consider,” May said.

Nevertheless, both May and Velez agreed that generative AI also comes with a wide range of opportunities. As Velez put it, GenAI is “giving people time back to be able to do other functions that they might not be able to do” without it.

Grace Dille
Grace Dille is MeriTalk's Assistant Managing Editor covering the intersection of government and technology.