As the impact of the coming artificial intelligence (AI) tech revolution is being hashed out at numerous levels of government, the Senate Intelligence Committee this week got its latest dose of input from private sector witnesses on one of its most important questions: how does AI affect national security?
At a committee hearing on Sept. 19, witnesses told senators how government, industry, and academia can responsibly develop and deploy AI to capture the benefits the emerging technology has to offer, and suggested strategies for workforce composition and open technology sharing that would lead to better outcomes in the long run.
Benjamin Jensen, senior fellow at the CSIS think tank and professor at the Marine Corps University School of Advanced Warfighting, explained to lawmakers that AI will be a critical capability for the nation going forward and central to integrated deterrence campaigns and warfighting.
However, Jensen highlighted an “often-invisible center of gravity” for integrating new technologies, including AI – the people, the bureaucracy, and the data infrastructure necessary to turn any technology into a strategic advantage.
“Get the right people in place with permissive policies and provide them access to computational capabilities at scale and you gain a position of advantage in modern competition. Deny your adversaries the ability to similarly wage algorithmic warfare and you turn this advantage into enduring strategic asymmetry,” Jensen said.
He recommended that in addition to employing people who have a basic understanding of data science and coding, the intelligence community and the military need to have a “smaller, nimble information age bureaucracy [with] open experimentation in place” to better leverage AI.
“They need reliable access to data centers to continually train and update machine learning models against adversaries. Failing to protect these requirements risks ceding the initiative to our adversaries,” Jensen said.
Yann LeCun, vice president and chief AI scientist at Meta Platforms and professor of Computer Science and Data Science at New York University, echoed Jensen’s point that access to AI technology is a critical defining issue that government, industry, and academia need to address as they plan any AI initiative.
“AI has progressed leaps and bounds … We’ve seen first-hand how making AI models available to researchers can reap enormous benefits,” LeCun said. “Having access to state-of-the-art AI will be an increasingly important driver of opportunity in the future for individuals, for companies, and economies as a whole.”
LeCun also identified safety as another issue that government, industry, and academia need to address.
“The current generation of AI tools is different from anything we’ve had before, and it’s important not to undervalue the far-reaching potential opportunities they present. However, like any new disruptive technology, advancements in AI are bound to make people uneasy,” LeCun said.
He explained that as the industry continues to develop AI technologies, it should ensure that tools are built and deployed responsibly. LeCun added that policymakers, academics, civil society, and industry should work together to maximize the potential benefits of AI and minimize its potential risks.
LeCun recommended the open sharing of current technologies as one way to begin addressing both issues.
“It’s better if AI is developed openly, rather than behind closed doors by a handful of companies … Companies should collaborate across industry, academia, government, and civil society to help ensure that such technologies are developed responsibly and with openness to minimize the potential risks and maximize the potential benefits,” LeCun said.