Alexander Kott, chief scientist at the U.S. Army Research Laboratory, said Wednesday at the Defense Systems Summit that creating artificial intelligence (AI) and machine learning (ML) solutions for complex battlefield environments requires different prioritization than commercial solutions do, and offered four tips for defense organizations looking to implement them.

“The complexity of a real-world mission is often enormous,” he said. “You cannot take for granted that simply getting in-house a machine learning application and machine learning consultants will somehow sprinkle magic dust on your problem. It will not.”

Kott cautioned against falling for AI and ML buzzwords, chasing trending technologies, or simply opting for the most convincing vendors. He provided four areas of emphasis that defense agencies should consider when “building mission-relevant applications of machine learning.”

Focus

“First, make sure you have the right focus,” Kott said. “To do that, one intellectual mechanism is to conceptualize your intelligent agents that perform your mission, with some idealized future.” This conceptualization refers to the actual AI product, in this case a machine-based “agent.”

Kott said this conceptualization will help agencies avoid vendor lock-in, set priorities appropriately, and concentrate on the capabilities an AI solution offers and how those capabilities contribute to the mission and the desired future state.

Complexity

“Number two, recognize the complexity of the tasks within your mission,” he said. “Be realistic about extreme complexity and the gaps of machine learning in its current stages.”

He gave the example of the AI and computer vision used for self-driving cars. “They are optimized for a very orderly world, a world of fairly well-marked highways, streets, rules, and so on.” That notion of order dissolves, he noted, when AI is applied in a “dystopic” combat environment that is both “extremely dynamic and extremely unpredictable.”

“It is a world of extreme complexity and it is not entirely suitable for today’s generation of AI,” the R&D scientist said.

Limitations

“Three, investigate whether your tools, your solutions, your vendors can deal with limitations of data,” he said. In particular, computer vision algorithms in combat scenarios may struggle with “dirty” images, full of noise and distortion, Kott said.

Thus, even with potentially millions of images on hand, there is a risk of extracting little actionable information from those huge volumes of data. “You have to at least have a partial solution, at least a work-around, and if you don’t it’s going to bite you,” he said.
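Kott’s warning about noise and distortion is easy to demonstrate in miniature. The sketch below is illustrative only, drawn from neither his talk nor any fielded system: a toy normalized-correlation detector is scored against a clean image and then against versions degraded with sensor-style noise and occlusion, and its confidence collapses as the imagery gets dirtier. All names and parameters here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def degrade(image, noise_sigma=0.3, occlusion_frac=0.2):
    """Simulate 'dirty' battlefield imagery: sensor noise plus occlusion."""
    noisy = image + rng.normal(0.0, noise_sigma, image.shape)
    mask = rng.random(image.shape) < occlusion_frac  # smoke/debris blocking pixels
    noisy[mask] = 0.0
    return np.clip(noisy, 0.0, 1.0)

def detector_score(image, template):
    """Toy detector: normalized correlation against a known target template."""
    a = image - image.mean()
    b = template - template.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

# A synthetic "target" template and a nearly clean observation of it.
template = rng.random((32, 32))
clean = np.clip(template + rng.normal(0.0, 0.05, template.shape), 0.0, 1.0)

print(f"clean score:     {detector_score(clean, template):.2f}")
for sigma in (0.1, 0.3, 0.6):
    dirty = degrade(clean, noise_sigma=sigma)
    print(f"sigma={sigma:.1f} score: {detector_score(dirty, template):.2f}")
```

The point the toy makes is the one Kott is driving at: sheer volume of imagery does not guarantee usable signal, so a partial solution or work-around for degraded data has to be part of the plan.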

Dialogue

“Finally, investigate how you’re going to deal with human issues: of your users, of your trainers of machine learning algorithms,” Kott said.

He said agencies should seek “at least a ten percent solution” to enable AI handlers to both understand how learning algorithms are performing their tasks and “to provide feedback to the machine learning agent.”

He said it is pivotal to establish a concept of dialogue with automated platforms about their successes and needed refinements, since such systems do not grasp feedback loops the way humans do.

Offering the example of personal home assistants, Kott said, “If you repeat a question a few minutes later, it would have no clue it already told you about that. Dialogue is not like that. Dialogue is when you build a common knowledge, common intent around your mission task and you actually continue to develop discourse around that cumulative knowledge.”
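The distinction Kott draws between one-off question answering and true dialogue can be sketched in a few lines. The toy agent below is purely illustrative, not a description of any vendor’s product: it keeps a running store of shared knowledge so that later turns build on earlier ones, whereas a stateless assistant answers every question cold. All names are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class DialogueAgent:
    """Toy agent that accumulates common knowledge across turns,
    unlike a stateless assistant that answers each question cold."""
    knowledge: dict = field(default_factory=dict)

    def tell(self, topic: str, fact: str) -> None:
        # Human feedback becomes part of the shared context.
        self.knowledge[topic] = fact

    def ask(self, topic: str) -> str:
        if topic in self.knowledge:
            # Later turns build on what was established earlier.
            return f"As established earlier: {self.knowledge[topic]}"
        return "No common ground yet; tell me and I will remember it."

agent = DialogueAgent()
print(agent.ask("route status"))   # no shared knowledge yet
agent.tell("route status", "Route Blue was clear as of 0900.")
print(agent.ask("route status"))   # builds on the earlier exchange
```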

Kott said there are currently no complete solutions here, only work-arounds, and that vendors need to prioritize this new area of research to make human-machine partnership truly effective.
