Artificial intelligence (AI) systems are getting awfully good at the Who, What, When, Where and even How for a variety of jobs, from military operations to financial transactions to medical diagnosis and treatment. But the Why is another story.
In medicine, for instance, an AI system can take mere minutes to assess a patient’s condition, DNA, medical history and a broad range of past treatments to recommend a course of action. Elsewhere, AI is being used to inform loan approvals, insurance contracts, parole eligibility and hiring decisions. In the military, AI systems are being trained to identify objects of interest in surveillance images, as part of a larger project to use AI to analyze millions of hours of full-motion video.
The Department of Defense (DoD) and other organizations see this kind of human-machine teaming as vital to future operations. The problem arises, however, when machines are asked to explain how they reached a conclusion: today they cannot describe, in human terms, how they arrived at a determination. As machine learning and deep learning systems increasingly teach themselves and become more autonomous, the need for machines to debrief their human counterparts, and to be accountable to them, will only grow.
The answer could lie in the emerging research field of Explainable Artificial Intelligence, or XAI, which aims to create a two-way dialog with AI systems before they become too advanced and unfathomable.
“Explainable AI–especially explainable machine learning–will be essential if future warfighters are to understand, appropriately trust, and effectively manage an emerging generation of artificially intelligent machine partners,” David Gunning, a Defense Advanced Research Projects Agency program manager, writes in describing DARPA’s current research into the subject.
DARPA wants to develop machine-learning techniques that, combined with human-computer interfaces, will allow human users to engage in some plain talk with machines. “New machine-learning systems will have the ability to explain their rationale, characterize their strengths and weaknesses, and convey an understanding of how they will behave in the future,” Gunning writes.
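As a rough illustration of what such an explanation facility might look like, the sketch below uses one common XAI technique, a global surrogate model: a shallow decision tree is trained to mimic a black-box classifier’s predictions, and the tree’s simple rules stand in as a human-readable rationale. The scikit-learn calls are standard, but the loan-style data and feature names are invented for the example, and nothing here represents DARPA’s actual systems.

```python
# Illustrative sketch only: a "global surrogate" explanation, one common XAI
# technique. A shallow decision tree is fit to mimic a black-box model's
# predictions, and its simple rules serve as a human-readable rationale.
# The synthetic loan-style data and feature names are hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
features = ["income", "debt_ratio", "years_employed"]
X = rng.normal(size=(1000, 3))
y = (X[:, 0] - X[:, 1] + 0.5 * X[:, 2] > 0).astype(int)  # hidden "true" rule

# The opaque model whose decisions we want to explain.
black_box = RandomForestClassifier(n_estimators=200).fit(X, y)

# Train an interpretable surrogate on the black box's own predictions.
surrogate = DecisionTreeClassifier(max_depth=3).fit(X, black_box.predict(X))

# Print the surrogate's decision rules as a plain-text explanation.
print(export_text(surrogate, feature_names=features))
```

The printed rules are only an approximation of the black box’s behavior, which is precisely the limitation that XAI research is trying to move beyond.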
The research agency has awarded multimillion-dollar contracts for two elements of its XAI program: Common Ground Learning and Explanation (COGLE) and Causal Models to Explain Learning (CAMEL). PARC, a company owned by Xerox, will develop COGLE, which it describes as a “highly interactive sense-making system” for establishing common ground between the ways human minds and machines work.
Charles River Analytics has a four-year, roughly $8 million contract for CAMEL, under which it will model deep learning systems and employ its Figaro probabilistic programming language to simplify explanations of how those machines work.
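To see why causal and probabilistic models lend themselves to explanation, consider the toy sketch below. It is written in plain Python rather than Figaro, and the sensor scenario and probabilities are made up; the point is simply that when a model’s structure is explicit, every hypothesis and the weight assigned to it can be reported back to a human.

```python
# Illustrative sketch only: a tiny probabilistic model evaluated by
# exhaustive enumeration, showing why such models are easy to explain --
# each hypothesis and its weight can be reported directly.
# This is plain Python, not Figaro or CAMEL; the "sensor detects a vehicle"
# scenario and all numbers are hypothetical.

# Prior belief and conditional probabilities (made-up values).
p_vehicle = 0.3                      # P(vehicle present)
p_detect = {True: 0.9, False: 0.1}   # P(detection | vehicle present?)

def posterior(detected: bool) -> dict:
    """Return P(vehicle | detection outcome), hypothesis by hypothesis."""
    weights = {}
    for vehicle in (True, False):
        prior = p_vehicle if vehicle else 1 - p_vehicle
        likelihood = p_detect[vehicle] if detected else 1 - p_detect[vehicle]
        weights[vehicle] = prior * likelihood
    total = sum(weights.values())
    return {h: w / total for h, w in weights.items()}

# Report the model's reasoning after a detection is observed.
for hypothesis, prob in posterior(detected=True).items():
    print(f"vehicle present = {hypothesis}: {prob:.2f}")
```

Because the model’s assumptions are written out explicitly, the same structure that drives the inference can also be read back as an account of why the system believes what it believes.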
XAI also is being explored in other contexts. The National Institute of Mental Health, for example, is researching XAI as a way to identify the links between brain activity and complex behaviors.
Regardless of the specific application, for DoD and other organizations it ultimately comes down to trust. If humans and cognitive machines are to work more closely together, they need to be able to understand and trust each other, and that starts with being able to explain how they think.