Government agencies have great interest in using ChatGPT and other generative artificial intelligence (AI) technologies but are proceeding cautiously amid security and other obstacles, Federal officials said at an AI forum on Aug. 2.

“Obviously, everybody’s excited about this. You can see it in the press, we can feel it on the staff in the Pentagon. People want access to these tools to be able to improve their workflows and develop new capabilities,” Lt. Col. Joseph Chapa, chief responsible AI officer for the U.S. Air Force, said during a panel discussion on ChatGPT at ATARC’s “Advancing Missions with AI Summit” event.

Chapa said the Department of Defense (DoD) has been experimenting with language models much smaller than OpenAI’s ChatGPT, which has generated intense debate over its potential applications – and its risks – since launching late last year.

“We’ve brought much smaller (language) models … inside of our secure networks at the secret level,” he said. “It’s not going to have the same language performance, but it does allow us to use these tools on our specific classified data set. And that’s very exciting.”
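The article does not specify which models or tooling the DoD is using, but the pattern Chapa describes – hosting a smaller open-weight model entirely inside a controlled network, with no calls to outside services – can be sketched roughly as below. The model directory, model choice, and prompt are illustrative assumptions, not details from the source.

# A minimal illustrative sketch (not DoD's actual setup): running a small,
# locally stored language model with the Hugging Face "transformers" library,
# entirely offline. local_files_only=True ensures no network fetches, which is
# the property that matters on an isolated, classified network.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_DIR = "/opt/models/small-lm"  # hypothetical path where weights were copied in

tokenizer = AutoTokenizer.from_pretrained(MODEL_DIR, local_files_only=True)
model = AutoModelForCausalLM.from_pretrained(MODEL_DIR, local_files_only=True)

prompt = "Summarize the key findings of the attached logistics report."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))

As Chapa notes, a smaller model trades some language performance for the ability to keep the entire workload on the secure network.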

But Chapa also warned that misinformation risks – and especially “security concerns” – present barriers to DoD “adopting large language models at scale.”

He added: “The thing that scares me is that I know people are using these models now. But they’re not doing it through our data platforms and government-funded equipment. I think they’re doing it on their own home computers. And as long as they’re doing that without touching controlled unclassified information, that’s great, but I suspect that at some point there will be some spills and some leaks based on people trying to do government work on their personal devices.”

His comments mirrored the nationwide discussion about ChatGPT, part of a broader debate about AI – and how Federal agencies can responsibly use it – as the technology rapidly advances. A recent Government Accountability Office (GAO) report said generative AI technologies such as ChatGPT and Google’s Bard have exploded to more than 100 million users, and that they present vast opportunities for creating a variety of content and automating administrative or other repetitive tasks.

But concerns about misinformation, bias, and other potential harms have led to widespread calls for Federal regulation of AI tools, including from OpenAI’s CEO.

Inside the U.S. Coast Guard, IT leaders are “very AI curious,” CDR Jonathan White, branch chief for cloud and data, said at the ATARC forum. “What I find most exciting right now is we are a little late to the game,” he said. “I think this organization is a little nascent in AI. We have some limited use cases that we’ve done, large scientific models. It’s a very small slice of the AI pie but very tempting [and] attention-grabbing, so it looks like it’s larger than it really is.”

Going forward, White said, he would like to see the Coast Guard use generative AI to help overstretched officers with more mundane tasks. “It is extremely hard to place people in the right spots all the time,” he said. If officers “are in the middle of an intersection or middle of a boarding … wouldn’t it be great to have a digital assistant with them to surface the information that they need to conduct that inspection?”

But he added that the agency is proceeding cautiously, adopting the mindset of “watch what’s happening in industry, understand what use cases are working and which cases are not working. The last thing we want to do is go down the road where we invest a lot of money in a particular technology, and it doesn’t bear fruit.”

At the National Aeronautics and Space Administration (NASA), officials face “a lot of data restrictions and security limitations” in their potential use of AI, David Na, a NASA IT specialist whose portfolio includes AI and machine learning, said at the forum.

Yet the agency, he said, is working “to bring in these large language models as a service across NASA … working to get research scientists access to these models to use in their data.”
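Na did not describe the service’s interface, but “large language models as a service” typically means a single agency-hosted endpoint that internal users call instead of a public API. The sketch below assumes an OpenAI-style chat route at a hypothetical internal URL; every name in it is an illustrative placeholder, not NASA’s actual API.

# A hedged sketch of an internal "LLM as a service" call. The gateway URL,
# route, and model name are hypothetical placeholders for illustration only.
import requests

GATEWAY_URL = "https://llm-gateway.internal.example.gov/v1/chat/completions"

resp = requests.post(
    GATEWAY_URL,
    json={
        "model": "agency-hosted-model",  # placeholder identifier
        "messages": [
            {"role": "user",
             "content": "List the variables in this dataset and suggest quality checks."},
        ],
    },
    timeout=30,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])

Centralizing access this way is also one means of enforcing the “data restrictions and security limitations” Na mentions: authentication, logging, and data-handling rules can live at the gateway rather than on each researcher’s machine.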

Na said he is personally “excited” about the technology underlying generative AI. “What people don’t realize or often forget is that the theory behind these large language models was developed decades ago. The problem was the tech wasn’t there. And now the tech is there. It’s growing … and as that continues to grow, only time will tell, but it’s impossible to really predict how fast these models could be.”

He added: “It’s also a security concern because as these models grow more and more, as they become more dense, who’s to say what they can and can’t do?”
