Leaders in the Federal intelligence community, speaking last Tuesday at Defense One’s Tech Summit, said their agencies are using vast stores of data, machine learning, and neural networks to move beyond simple fact-finding and radically re-envision how the intelligence community acts on credible information. But those technological advances bring thorny new ethical and procedural questions, the officials said.

Predicting the Future

Sean Roche, associate deputy director for digital innovation at the Central Intelligence Agency (CIA), said the CIA is in a “unique moment” with regard to technology’s ability to solve a wide range of problems.

“It’s about a set of technologies, not necessarily a procurement model or anything else,” he said. “At least in my 37 years, I’ve never been this enthusiastic about what can happen as quickly as it can happen.”

Roche discussed how the agency is using emerging technology to enter a realm where data no longer simply describes a situation, but actively suggests how to act on it and “actually optimize the decision process.”

“We’re kind of leaving the era of descriptive analytics,” he said. “We’re leaving the era of big data because, quite frankly, you use data to make decisions. If you’re not able to model data, the data is not that useful. So, descriptive analytics is good, but a little bit too much rear-view mirror. Where we have to get to is predictive analytics and prescriptive analytics.”
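
Roche did not spell out what that progression looks like in practice, but the three tiers can be sketched in a few lines of Python. The data, trend model, and decision threshold below are invented purely for illustration:

```python
# Illustrative only: toy data standing in for any operational metric.
incidents = [12, 15, 11, 18, 21, 19, 24]  # weekly counts, oldest first
n = len(incidents)

# Descriptive analytics: summarize what already happened (rear-view mirror).
print("average so far:", round(sum(incidents) / n, 1))

# Predictive analytics: estimate next week with a naive least-squares trend.
x_bar, y_bar = (n - 1) / 2, sum(incidents) / n
slope = sum((x - x_bar) * (y - y_bar) for x, y in enumerate(incidents)) \
        / sum((x - x_bar) ** 2 for x in range(n))
forecast = y_bar + slope * (n - x_bar)
print("next-week forecast:", round(forecast, 1))

# Prescriptive analytics: recommend an action given the forecast.
print("recommendation:", "surge analysts" if forecast > 20 else "steady state")
```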

Human-Robot Partnership

Dr. Jason Matheny, director of the Intelligence Advanced Research Projects Activity (IARPA) in the Office of the Director of National Intelligence, said that his research outfit is working to determine the best sorts of predictive analysis available, whether human- or machine-based.

“Right now, we’re running five research programs looking at combinations of human judgments and machine learning models,” he said. “One of the findings of that is that there are entire classes of events that we need to be especially humble about.”
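
Matheny did not detail how those programs combine the two, but a toy sketch of blending analyst forecasts with a model’s output, then scoring the result once the outcome is known, might look like the following. The weights, forecasts, and choice of Brier scoring are assumptions for illustration, not IARPA’s methods:

```python
# Hedged sketch: blend human forecasts with a model's probability estimate.
analyst_forecasts = [0.60, 0.72, 0.55, 0.80]  # P(event) from four analysts
model_forecast = 0.65                         # P(event) from an ML model

crowd = sum(analyst_forecasts) / len(analyst_forecasts)
blend = 0.5 * crowd + 0.5 * model_forecast    # simple equal-weight blend
print(f"crowd={crowd:.2f}, blended={blend:.2f}")

# Score with the Brier score (lower is better) once the outcome is known.
outcome = 1  # the event occurred
brier = (blend - outcome) ** 2
print(f"Brier score: {brier:.3f}")
```

Comparing Brier scores across many such forecasts is one way a program could tell which classes of events humans, machines, or blends handle best, and which call for the humility Matheny described.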

One of IARPA’s programs, he said, involved upwards of 30,000 analysts making five million forecasts on geopolitical events. Matheny said that conventional human wisdom, as it turns out, is failing us when it comes to forecasting future events, particularly economic ones, where performance is “pitiful.” But IARPA is finding success in some areas, he said.

“It turns out that one of the early indicators of cyberattack planning by cyber actors is to trade exploits, these new pieces of malware, on black markets,” Matheny said. “We can actually monitor these market prices in order to get earlier lead time on when an attack is likely to occur.”

Even though the sales can’t be tracked to individual actors, they provide general foresight on when a large-scale attack is looming, he said. In the end, it’s human-machine collaboration that might lead to the biggest gains.

“Ultimately, we think that a combination of analytic tools, machine learning approaches, and human judgment are needed for most of the important intelligence questions,” Matheny said.
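
Matheny did not describe the tooling behind that market monitoring, but the core idea, flagging when exploit prices spike above a rolling baseline, can be sketched simply. The price series and thresholds here are invented:

```python
# Illustrative: alert when a price jumps well above its recent baseline.
from statistics import mean, stdev

prices = [900, 950, 920, 980, 940, 960, 1900, 2400]  # daily prices, USD

WINDOW, Z_ALERT = 5, 3.0
for day in range(WINDOW, len(prices)):
    baseline = prices[day - WINDOW:day]
    mu, sigma = mean(baseline), stdev(baseline)
    z = (prices[day] - mu) / sigma if sigma else 0.0
    if z > Z_ALERT:
        print(f"day {day}: price {prices[day]} is {z:.1f} sigma above baseline")
```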

Data as Firepower

Natalie Laing, deputy director of operations at the National Security Agency (NSA), said data is now often used by bad actors to mount cyber offensives, creating new considerations for intelligence agencies regarding the data’s value and influence.

“Data has become the new raw material. It’s a strategic asset,” she said. “So, what we’re seeing is, [whereas] in the past when we would use data exploitation for gathering information and informing intelligence insights, data is actually now being weaponized, if you will, as a strategic asset, more for disruption and degrading.”

She gave an example of the Department of Defense Information Network (DoDIN), calling it a “massive target” for a wide range of adversaries. “We’ll see a vulnerability exposed and within 24 hours, you probably have a hundred thousand hits looking for that particular vulnerability,” she said. “It’s quick and it’s extensive.”
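
As a purely illustrative sketch of the kind of measurement Laing described, counting probes for a newly disclosed vulnerability inside its first 24 hours might look like this; the timestamps, log format, and signature name are invented:

```python
# Illustrative: count sensor hits probing a new vulnerability in 24 hours.
from datetime import datetime, timedelta

disclosure = datetime(2018, 6, 25, 14, 0)
hits = [
    (datetime(2018, 6, 25, 14, 7), "CVE-2018-XXXX probe"),
    (datetime(2018, 6, 25, 19, 42), "CVE-2018-XXXX probe"),
    (datetime(2018, 6, 26, 13, 55), "CVE-2018-XXXX probe"),
    (datetime(2018, 6, 27, 9, 30), "CVE-2018-XXXX probe"),  # outside window
]

window_end = disclosure + timedelta(hours=24)
first_day = sum(1 for ts, sig in hits if disclosure <= ts < window_end)
print(f"probes in first 24h: {first_day}")
```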

Looming Questions and Outlook

Roche expressed concern that it is not clear how deep learning models and neural networks actually arrive at the prescriptive answers they provide. Machines that defy human comprehension may sound like science fiction, but it’s an issue the government is already taking seriously.

“The challenge for us and for everyone getting to prescriptive is that in the intelligence community you can never be a black box,” he said. “Artificial intelligence and any of this machine learning, deep learning, can never be something that spits out an answer that we can’t reach into, manipulate, and be held accountable for and explain how we got there.”

He said this presents something of an unintended advantage. The need to manipulate, train, and direct these machine learning models, Roche argued, will breed technical acumen in the intelligence community, deepening its understanding of deep learning and ensuring that agencies do not use technologies without firm knowledge of their inner workings. That’s cause for optimism, he said.
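
The agencies have not published their tooling, but in practice a “no black box” requirement often translates into interpretable models whose internals an analyst can inspect and explain. A minimal, purely illustrative sketch, with invented features and labels:

```python
# Interpretable model sketch: per-feature weights an analyst can explain.
import numpy as np
from sklearn.linear_model import LogisticRegression

features = ["exploit_price_spike", "chatter_volume", "infra_registration"]
X = np.array([[1, 0, 1], [0, 0, 0], [1, 1, 1], [0, 1, 0], [1, 1, 0], [0, 0, 1]])
y = np.array([1, 0, 1, 0, 1, 0])  # 1 = attack followed, 0 = it did not

model = LogisticRegression().fit(X, y)
for name, weight in zip(features, model.coef_[0]):
    print(f"{name}: {weight:+.2f}")  # signed evidence each feature contributes
```

Unlike a deep network, every weight here can be reached into, questioned, and defended, which is the accountability standard Roche described.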

“The intelligence community and greater government since 9/11 has sized itself and actually wired itself as a mechanism that can solve worldwide, very dispersed problems that have many, many layers, very, very quickly,” Roche said. “I just see a tremendous opportunity to continue to do what our country needs to do, which is be effective across the widest range of challenges.”
