Top Federal and industry IT experts said today that operationalizing AI at scale across the government requires leveraging existing governance frameworks, including President Biden’s recent AI executive order, to overcome hurdles posed by the emerging technology, such as bias and transparency concerns.

The Department of Homeland Security (DHS) is accelerating the adoption of AI across its component agencies by holding important conversations at the front end of technology development, a top Transportation Security Administration (TSA) official said.

“The department is really leaning in at DHS and has created a synergy at the front end of the conversation … as the components of the department are exploring technology to bring everyone together,” Matt Gilkeson, division director for TSA’s Innovation Task Force, said during MeriTalk’s Accelerate AI Forum in Washington, D.C., today.

“They formed the Responsible Use Group, they call it the RUG. They’ve got Privacy and Civil Rights and Civil Liberties, they’ve got the components in the room, and having a conversation at the front end about how we enable this to go forward with the right balance of policy and governance, but with the appropriate safety, civil rights … and the acceleration of that adoption,” Gilkeson said.

The TSA is currently testing AI in two primary use case areas: biometrics and security detection, Gilkeson said.

“We have to scan people and we have to scan the property,” he continued, adding, “Traditionally, those were algorithms that were developed by software developers and companies and now they’re being informed by machine learning models.”

James Donlon, director of solution engineering at Oracle, stressed that it’s important for agencies to begin testing AI models now, but in a safe environment.

“Do something, and do it now, but that doesn’t mean do everything; it means test, but in an environment where you are likely to know the results,” Donlon said.

Gilkeson noted that DHS has done important work over the past couple of months implementing employee training on generative AI tools, approving generative AI tools for use by its workforce, and issuing detailed use policies around generative AI tools.

Dorothy Aronson, the chief data officer and chief AI officer (CAIO) at the National Science Foundation (NSF), said during the panel discussion that accepting AI and training the Federal workforce on the emerging technology is critical.

“I don’t look at this as something that we have the option of stopping,” Aronson said. “We need to explain to the world this is a must. But do it in a soft way so that it feels like a natural adoption.”

“Everyone is going to have access to these tools whether we bring them in house or not,” Aronson continued, adding, “So if you don’t train the people, they’ll misuse it. I think we have to run as fast as we can to get this done.”

The panelists agreed that building up AI use cases should be a priority for the Federal government. The Government Accountability Office (GAO) recently reported that agencies have 1,200 current and planned AI use cases.

However, they argued that effective AI adoption is not possible without sound data standards.

“It comes back to what your data governance is, what your data standards are, because if you’re going to go after use cases, you’re going to have to have your data house in order, I think, as the first order of business,” Gilkeson said.

NSF’s CAIO agreed, noting that some “basic heavy lifting” must be done for agencies to implement AI effectively and meet mission outcomes.

“Find your data,” Aronson said. “Find the data that people are going to want to use and document what you’ve got.”

She concluded, “Some really basic heavy lifting has to be done for any of this AI to work, and so if you don’t have a really solid data catalog yet, start working on that.”
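As a purely illustrative aside, the kind of documentation Aronson describes can start very small: one structured record per dataset. The Python sketch below is not from the panel; every field name in it (owner, source_system, and so on) is a hypothetical placeholder for whatever metadata an agency decides to track.

from dataclasses import dataclass, field, asdict
import json

@dataclass
class DatasetEntry:
    """One catalog record per dataset -- illustrative fields only."""
    name: str                                  # human-readable dataset name
    owner: str                                 # accountable steward (hypothetical field)
    source_system: str                         # where the data lives (hypothetical field)
    description: str                           # what the data contains and why it matters
    tags: list = field(default_factory=list)   # keywords so people can find it

# Example: documenting a single dataset before any AI work begins.
entry = DatasetEntry(
    name="screening-throughput-2023",
    owner="data-office@example.gov",
    source_system="ops-warehouse",
    description="Daily checkpoint throughput counts, aggregated by airport.",
    tags=["operations", "time-series"],
)

# A catalog can begin as nothing more than a list of these records on disk.
print(json.dumps(asdict(entry), indent=2))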
