As the Pentagon rushes headlong into putting artificial intelligence into more of its operations, analyses, and weapons, it has said it doesn’t want to ignore caution flags about placing too much faith in the power of machines. The Electronic Frontier Foundation (EFF) wants to help with that, posting an array of those flags in a white paper that offers guidance on military use of AI and includes a list of the technology’s shortcomings and the ways it can go wrong.

In the paper, addressed primarily to military planners and defense contractors, EFF goes into detail about the current state of AI and makes recommendations on when and when not to use the technology. EFF, a nonprofit group focused mainly on civil liberties in an online world, noted the recent tumult at Google, in which some employees quit over the company’s involvement in the Department of Defense’s Project Maven. Google eventually bailed on the project but acknowledges that military AI is inevitable. EFF wants to “bridge the gap” between proponents and opponents of military AI, white paper author Peter Eckersley wrote in an accompanying blog post.

“We are at a critical juncture,” Eckersley wrote in noting both the promise and the limitations of AI. “Machine learning technologies have received incredible hype, and indeed they have made exciting progress on some fronts, but they remain brittle, subject to novel failure modes, and vulnerable to diverse forms of adversarial attack and manipulation. They also lack the basic forms of common sense and judgment on which humans usually rely.”

The paper addresses what EFF calls three core questions: the technical and strategic risks of applying current machine learning to weapons systems or military command and control; appropriate responses to the use of those technologies; and whether AI is safe for military use.

Among its red flags:

  • Machine learning systems can be easily fooled or subverted, with neural networks vulnerable to novel attacks such as adversarial examples (illustrated in the sketch after this list), model stealing, and data poisoning.
  • Because the playing field in cybersecurity favors attackers over defenders, AI applications are likely to be running on insecure platforms.
  • Reinforcement learning, the approach used to train many current AI applications, creates systems that are unpredictable, hard to control, and unsuited to complex real-world deployment.
  • Those real-world situations are very complex and impossible to model, which could lead to catastrophic results, such as accidental commencement or escalation of conflicts. EFF calls this the greatest risk posed by AI.
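To make the first of those flags concrete, the following is a minimal, hypothetical sketch of an adversarial example, using the well-known fast gradient sign method against a toy logistic-regression classifier. The model, its weights, the input, and the perturbation budget are all invented for illustration; none of this comes from the EFF white paper or from any DoD system.

```python
# Hypothetical illustration of an adversarial example (the first red flag above),
# using the fast gradient sign method (FGSM) on a toy logistic-regression model.
# The weights, input, and perturbation budget are invented for this sketch and
# are not drawn from the EFF white paper or any real system.
import numpy as np

rng = np.random.default_rng(0)

w = rng.normal(size=100)   # hypothetical trained weights
b = 0.1                    # hypothetical bias

def predict(x):
    """Model's confidence that input x belongs to the positive class."""
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

x = rng.normal(size=100)   # a benign input
label = 1.0                # assume its true class is positive

# FGSM: perturb each feature by a small, bounded amount in the direction that
# most increases the loss. For logistic regression, the gradient of the
# cross-entropy loss with respect to x is (p - label) * w.
p = predict(x)
grad_x = (p - label) * w
epsilon = 0.05             # per-feature perturbation budget
x_adv = x + epsilon * np.sign(grad_x)

print(f"score on clean input:     {predict(x):.3f}")
print(f"score on perturbed input: {predict(x_adv):.3f}")
# A small, bounded change to every feature can move the model's score a long
# way -- the kind of brittleness the EFF paper warns about.
```

Comparable attacks on real image classifiers can flip a prediction with changes too small for a person to notice, which is why the paper lists them among its red flags.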

The Department of Defense isn’t ignorant of the ethical concerns surrounding the technology. At a meeting last month to describe the mission of DoD’s newly formed Joint Artificial Intelligence Center, Brendan McCord, the Defense Innovation Unit’s (DIU) machine learning chief, said, “Our focus will include ethics, humanitarian considerations, long-term and short-term AI safety.”

As for one of the public’s biggest concerns about AI—killer robots—DoD has always had a rule that only an authorized human can give the order to fire. EFF does point out, however, that AI systems can recommend an attack, which could lead to mistakes if a system’s analysis or reasoning is faulty.

The white paper offers several recommendations for mitigating these risks, including focusing research and development on predictability and robustness; focusing on AI systems outside the “kill chain,” such as those handling logistics and defensive cybersecurity, both off and on the battlefield; and sharing research with civilians and academia, as well as among the military services and other organizations.

DoD sees its pursuit of AI as a team sport, expecting to collaborate extensively with industry, academia, and other groups. And some of that collaboration can include outside advice and warnings on the limits of current AI systems and how they apply to military uses.
