As artificial intelligence technologies become more integrated into government operations, two military officials said on Feb. 13 that agencies can adopt a range of strategies to mitigate risks associated with AI adoption while also fostering more positive perceptions of the technology among their workforces.
Speaking at ATARC’s Public Sector AI Summit, Col. Travis Hartman, chief technology officer at the U.S. Army Forces Command, explained that he has team members experiment with AI technology in their areas of expertise, allowing them to critically assess its accuracy.
“Having people look at it and work with it initially where they have that domain expertise, it helps instill that caution that they’re not going to be quite as accepting,” said Hartman. “They’ll look at it with a more critical eye, and in many cases, they will run some of the more risky areas past domain experts to make sure this is really the way you do it.”
James Palumbo, deputy command information officer at the Naval Facilities Engineering Systems Command, also emphasized the importance of listening to employees before implementing AI solutions, saying that process can help turn them into advocates for AI adoption.
“I think one of the main mistakes we make in our career field is, hey, we’ve got a solution – throw it out there,” said Palumbo. “The first thing we need to do is just listen to some folks, and you’ll get a clear idea of what’s frustrating them … give them an opportunity to point [to] where the AI is going to go, as opposed to us trying to force it down.”
“Being able to listen through them, understand their process, and focus the solution to meet that day-to-day gripe that they have, then they become an advocate, and they start to sell it to the rest of their community,” Palumbo continued.
Creating annual AI training regimens can also help mitigate risk and improve AI use among Federal employees, added Allen Hill, chief information officer at the Federal Communications Commission. Those mitigations include making sure the workforce knows how to properly prompt AI tools and considers guardrails before implementing the technology too quickly, he said.
“I believe no one should be using AI tools without proper training on the tool,” said Hill. “We need to be very deliberate in the use cases and making sure that we have the guardrails in place so that, one, the people understand how to use it, create the efficiencies we can gain out of it, and also make sure that we [are] able to use it in an efficient manner that does not create more handcuffs on us too.”
