Federal government and industry officials discussed the importance of adopting responsible artificial intelligence practices during a panel discussion on Oct. 2 – in particular, the need to make sure that use of the technology aligns with an organization's values before going ahead with deployment.

Speaking at the Oct. 2 Chief Artificial Intelligence Officer (CAIO) Summit in Washington, officials shared practices that their agencies and businesses have established for responsible AI use – starting with an evaluation of organizational values to understand where the technology is needed. 

“When you start with the business value, then you can start thinking about, what’s the human impact?” said Seth Dobrin, a former CAIO at IBM.  

“The EU [European Union] is demanding that we control bias in these models,” he said, while adding, “that’s the wrong place to control the bias.” Rather, he said, “we need to control it in the outcomes, because that’s how we’re going to determine where these models are going to be deployed.” 

Identifying and mitigating bias in data and model training is a challenge that can be addressed through a values-based approach, panelists agreed.

Before her current role as the digital transformation culture and communication lead at NASA, Krista Kinnard helped the Department of Labor evaluate an AI tool for assisting veterans with civilian job searches. Despite its potential, the tool was ultimately rejected due to biased outcomes, she said.

“We did a nine-month evaluation going back and forth and back and forth with this company, and we actually brought in the technologists who were training their model,” said Kinnard. “Ultimately what we came up to was the values that the Department of Labor was embedding in their framework for ‘how do we want to be implementing artificial intelligence’ – we were not able to check those markers off with the evaluation of this tool.” 

Other methods panelists shared for establishing responsible AI practices include widespread education on ethical use; transparency about tradeoffs when choosing between AI models and their outcomes; adopting a shared definition of "responsible" AI and AI use; and establishing governance boards made up of different roles across the agency or business.

Governance boards can help address "big challenges" by setting "a tone" for enabling an organization to "deliver services" in "the way in which it's most advantageous," said Brian Peretti, the chief technology officer and deputy CAIO at the Department of the Treasury.

“The key for [establishing governance boards] is to be able to make sure that we bring together everybody across the Treasury to be able to talk about these issues, to be able to make sure we’re understanding when we’re deploying AI [that] all the factors are considered,” said Peretti.

“So, as we engage across the organization, understanding where there are challenges, where there may be privacy or rights-impacting applications, that we look through it, that we think about why it’s being done – is it the right reason to do, are we impacting someone in an improper way,” he continued.

Guidance for the use of AI in Federal agencies, released in March by the White House Office of Management and Budget, outlined requirements for agencies to establish AI Governance Boards by May 27, 2024.

Weslan Hansen is a MeriTalk Staff Reporter covering the intersection of government and technology.